Test Report: KVM_Linux_crio 21923

0ff1edca1acc03f8c3eb691c9cf9caebdbe6133d:2025-11-20:42417

Test fail (15/345)

TestAddons/parallel/Registry (73.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.574251ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-c76hc" [8341ed8b-de18-404d-9892-7e44cbdd07e3] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003542757s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-f74ln" [735676cf-f787-4c40-aea2-353fd6d6c050] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00409447s
addons_test.go:392: (dbg) Run:  kubectl --context addons-947553 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-947553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Non-zero exit: kubectl --context addons-947553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.073493045s)

-- stdout --
	pod "registry-test" deleted from default namespace

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:399: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-947553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:403: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted from default namespace
*
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 ip
2025/11/20 20:26:04 [DEBUG] GET http://192.168.39.80:5000
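
For reference, both probes the test performs can be re-run by hand against the same profile; a minimal sketch, assuming addons-947553 is still up with the registry addon enabled:

    # in-cluster probe of the registry Service (what addons_test.go:397 runs)
    kubectl --context addons-947553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # host-side probe of registry-proxy on the node IP (what the ip / GET step above does)
    curl -sS "http://$(out/minikube-linux-amd64 -p addons-947553 ip):5000/"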
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-947553 -n addons-947553
helpers_test.go:252: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 logs -n 25: (1.435641841s)
helpers_test.go:260: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-838975 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │ 20 Nov 25 20:20 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ -o=json --download-only -p download-only-948147 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ --download-only -p binary-mirror-717684 --alsologtostderr --binary-mirror http://127.0.0.1:46607 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ -p binary-mirror-717684                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ addons  │ disable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ addons  │ enable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ start   │ -p addons-947553 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ enable headlamp -p addons-947553 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ ip      │ addons-947553 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:04.799759    8315 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:04.799869    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.799880    8315 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:04.799886    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.800101    8315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:21:04.800589    8315 out.go:368] Setting JSON to false
	I1120 20:21:04.801389    8315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":215,"bootTime":1763669850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:21:04.801502    8315 start.go:143] virtualization: kvm guest
	I1120 20:21:04.803491    8315 out.go:179] * [addons-947553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:21:04.804816    8315 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:21:04.804809    8315 notify.go:221] Checking for updates...
	I1120 20:21:04.807406    8315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:04.808794    8315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:04.810101    8315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:04.811420    8315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:21:04.812487    8315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:21:04.813679    8315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:21:04.845057    8315 out.go:179] * Using the kvm2 driver based on user configuration
	I1120 20:21:04.846216    8315 start.go:309] selected driver: kvm2
	I1120 20:21:04.846231    8315 start.go:930] validating driver "kvm2" against <nil>
	I1120 20:21:04.846241    8315 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:21:04.846961    8315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:21:04.847180    8315 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:21:04.847211    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:04.847249    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:04.847263    8315 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1120 20:21:04.847320    8315 start.go:353] cluster config:
	{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:04.847407    8315 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:21:04.848659    8315 out.go:179] * Starting "addons-947553" primary control-plane node in "addons-947553" cluster
	I1120 20:21:04.849659    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:04.849691    8315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:21:04.849701    8315 cache.go:65] Caching tarball of preloaded images
	I1120 20:21:04.849792    8315 preload.go:238] Found /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:21:04.849803    8315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:21:04.850086    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:04.850110    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json: {Name:mk61841fddacaf75a98d91c699b32f9aeeaf9c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:04.850231    8315 start.go:360] acquireMachinesLock for addons-947553: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 20:21:04.850284    8315 start.go:364] duration metric: took 40.752µs to acquireMachinesLock for "addons-947553"
	I1120 20:21:04.850302    8315 start.go:93] Provisioning new machine with config: &{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
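
The same config is persisted as the profile's config.json (path logged above at profile.go:143); it can be pretty-printed for inspection, e.g.:

    python3 -m json.tool /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json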
	I1120 20:21:04.850352    8315 start.go:125] createHost starting for "" (driver="kvm2")
	I1120 20:21:04.852328    8315 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1120 20:21:04.852480    8315 start.go:159] libmachine.API.Create for "addons-947553" (driver="kvm2")
	I1120 20:21:04.852506    8315 client.go:173] LocalClient.Create starting
	I1120 20:21:04.852580    8315 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem
	I1120 20:21:05.105122    8315 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem
	I1120 20:21:05.182169    8315 main.go:143] libmachine: creating domain...
	I1120 20:21:05.182188    8315 main.go:143] libmachine: creating network...
	I1120 20:21:05.183682    8315 main.go:143] libmachine: found existing default network
	I1120 20:21:05.183926    8315 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.184462    8315 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d98350}
	I1120 20:21:05.184549    8315 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-947553</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.190086    8315 main.go:143] libmachine: creating private network mk-addons-947553 192.168.39.0/24...
	I1120 20:21:05.255182    8315 main.go:143] libmachine: private network mk-addons-947553 192.168.39.0/24 created
	I1120 20:21:05.255605    8315 main.go:143] libmachine: <network>
	  <name>mk-addons-947553</name>
	  <uuid>aa8efef2-a4fa-46da-99ec-8e728046a9cf</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:9d:8a:68'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
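
The generated network can also be managed directly with virsh; a sketch (the XML file name is illustrative):

    virsh --connect qemu:///system net-dumpxml mk-addons-947553 > mk-addons-947553.xml   # inspect the live definition
    virsh --connect qemu:///system net-define mk-addons-947553.xml                       # (re)define it from XML
    virsh --connect qemu:///system net-start mk-addons-947553                            # activate the bridge (virbr1)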
	
	I1120 20:21:05.255642    8315 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.255667    8315 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1120 20:21:05.255686    8315 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.255775    8315 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21923-3793/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1120 20:21:05.515325    8315 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa...
	I1120 20:21:05.718020    8315 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk...
	I1120 20:21:05.718065    8315 main.go:143] libmachine: Writing magic tar header
	I1120 20:21:05.718104    8315 main.go:143] libmachine: Writing SSH key tar header
	I1120 20:21:05.718203    8315 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.718284    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553
	I1120 20:21:05.718335    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 (perms=drwx------)
	I1120 20:21:05.718363    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines
	I1120 20:21:05.718383    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines (perms=drwxr-xr-x)
	I1120 20:21:05.718404    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.718421    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube (perms=drwxr-xr-x)
	I1120 20:21:05.718438    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793
	I1120 20:21:05.718456    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793 (perms=drwxrwxr-x)
	I1120 20:21:05.718473    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1120 20:21:05.718490    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1120 20:21:05.718505    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1120 20:21:05.718521    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1120 20:21:05.718536    8315 main.go:143] libmachine: checking permissions on dir: /home
	I1120 20:21:05.718549    8315 main.go:143] libmachine: skipping /home - not owner
	I1120 20:21:05.718557    8315 main.go:143] libmachine: defining domain...
	I1120 20:21:05.719886    8315 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1120 20:21:05.727760    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:79:1f:b5 in network default
	I1120 20:21:05.728410    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:05.728434    8315 main.go:143] libmachine: starting domain...
	I1120 20:21:05.728441    8315 main.go:143] libmachine: ensuring networks are active...
	I1120 20:21:05.729136    8315 main.go:143] libmachine: Ensuring network default is active
	I1120 20:21:05.729504    8315 main.go:143] libmachine: Ensuring network mk-addons-947553 is active
	I1120 20:21:05.730087    8315 main.go:143] libmachine: getting domain XML...
	I1120 20:21:05.731121    8315 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <uuid>2ab490c5-e4f0-46af-88ec-dee8117466b4</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:a7:2c'/>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:79:1f:b5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
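
The defined domain can likewise be driven manually with virsh; a sketch (XML file name illustrative):

    virsh --connect qemu:///system define addons-947553.xml                # register the domain from XML
    virsh --connect qemu:///system start addons-947553                     # boot it
    virsh --connect qemu:///system domifaddr addons-947553 --source arp    # poll interface addresses, as the wait loop below does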
	
	I1120 20:21:07.012614    8315 main.go:143] libmachine: waiting for domain to start...
	I1120 20:21:07.013937    8315 main.go:143] libmachine: domain is now running
	I1120 20:21:07.013958    8315 main.go:143] libmachine: waiting for IP...
	I1120 20:21:07.014713    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.015361    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.015380    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.015661    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.015708    8315 retry.go:31] will retry after 270.684091ms: waiting for domain to come up
	I1120 20:21:07.288186    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.288839    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.288865    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.289198    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.289247    8315 retry.go:31] will retry after 384.258097ms: waiting for domain to come up
	I1120 20:21:07.674731    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.675347    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.675362    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.675602    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.675642    8315 retry.go:31] will retry after 325.268494ms: waiting for domain to come up
	I1120 20:21:08.002089    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.002712    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.002729    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.003011    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.003044    8315 retry.go:31] will retry after 532.953777ms: waiting for domain to come up
	I1120 20:21:08.537708    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.538539    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.538554    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.538839    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.538878    8315 retry.go:31] will retry after 671.32775ms: waiting for domain to come up
	I1120 20:21:09.212032    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.212741    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.212765    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.213102    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.213142    8315 retry.go:31] will retry after 640.716702ms: waiting for domain to come up
	I1120 20:21:09.855420    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.856063    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.856083    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.856391    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.856428    8315 retry.go:31] will retry after 715.495515ms: waiting for domain to come up
	I1120 20:21:10.573053    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:10.573668    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:10.573685    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:10.574006    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:10.574049    8315 retry.go:31] will retry after 1.386473849s: waiting for domain to come up
	I1120 20:21:11.962706    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:11.963438    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:11.963454    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:11.963745    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:11.963779    8315 retry.go:31] will retry after 1.671471747s: waiting for domain to come up
	I1120 20:21:13.637832    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:13.638601    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:13.638620    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:13.639009    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:13.639040    8315 retry.go:31] will retry after 1.524844768s: waiting for domain to come up
	I1120 20:21:15.165792    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:15.166517    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:15.166555    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:15.166908    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:15.166949    8315 retry.go:31] will retry after 2.171556586s: waiting for domain to come up
	I1120 20:21:17.341326    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:17.341989    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:17.342008    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:17.342371    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:17.342408    8315 retry.go:31] will retry after 2.613437366s: waiting for domain to come up
	I1120 20:21:19.957329    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:19.958097    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:19.958115    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:19.958466    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:19.958501    8315 retry.go:31] will retry after 4.105323605s: waiting for domain to come up
	I1120 20:21:24.068938    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069767    8315 main.go:143] libmachine: domain addons-947553 has current primary IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069790    8315 main.go:143] libmachine: found domain IP: 192.168.39.80
	I1120 20:21:24.069802    8315 main.go:143] libmachine: reserving static IP address...
	I1120 20:21:24.070350    8315 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-947553", mac: "52:54:00:7b:a7:2c", ip: "192.168.39.80"} in network mk-addons-947553
	I1120 20:21:24.251658    8315 main.go:143] libmachine: reserved static IP address 192.168.39.80 for domain addons-947553
	I1120 20:21:24.251676    8315 main.go:143] libmachine: waiting for SSH...
	I1120 20:21:24.251682    8315 main.go:143] libmachine: Getting to WaitForSSH function...
	I1120 20:21:24.254839    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255480    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.255507    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255698    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.255932    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.255946    8315 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1120 20:21:24.357511    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:21:24.357947    8315 main.go:143] libmachine: domain creation complete
	I1120 20:21:24.359373    8315 machine.go:94] provisionDockerMachine start ...
	I1120 20:21:24.361503    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.361927    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.361949    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.362121    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.362368    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.362381    8315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:21:24.462018    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1120 20:21:24.462045    8315 buildroot.go:166] provisioning hostname "addons-947553"
	I1120 20:21:24.464884    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465302    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.465327    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465556    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.465783    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.465796    8315 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-947553 && echo "addons-947553" | sudo tee /etc/hostname
	I1120 20:21:24.590591    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947553
	
	I1120 20:21:24.593332    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593716    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.593739    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593959    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.594201    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.594220    8315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-947553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-947553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-947553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:21:24.704349    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
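
Each provisioning step above is a one-shot SSH command run with the generated machine key; a manual equivalent, with the key path, user, and IP taken from this run's logs:

    ssh -i /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa docker@192.168.39.80 hostname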
	I1120 20:21:24.704375    8315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 20:21:24.704425    8315 buildroot.go:174] setting up certificates
	I1120 20:21:24.704437    8315 provision.go:84] configureAuth start
	I1120 20:21:24.707018    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.707382    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.707405    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709518    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709819    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.709844    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709960    8315 provision.go:143] copyHostCerts
	I1120 20:21:24.710021    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 20:21:24.710131    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 20:21:24.710204    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 20:21:24.710279    8315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.addons-947553 san=[127.0.0.1 192.168.39.80 addons-947553 localhost minikube]
	I1120 20:21:24.868893    8315 provision.go:177] copyRemoteCerts
	I1120 20:21:24.868955    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:21:24.871421    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.871778    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.871813    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.872001    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:24.954555    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:21:24.986020    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:21:25.016669    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:21:25.046712    8315 provision.go:87] duration metric: took 342.262806ms to configureAuth
	I1120 20:21:25.046739    8315 buildroot.go:189] setting minikube options for container-runtime
	I1120 20:21:25.046974    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:25.049642    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050132    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.050155    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050331    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.050555    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.050571    8315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:21:25.295480    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:21:25.295505    8315 machine.go:97] duration metric: took 936.115627ms to provisionDockerMachine
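The CRIO_MINIKUBE_OPTIONS write above is just a shell command executed over SSH against the VM. A sketch of that remote execution using golang.org/x/crypto/ssh, with the host, user, and key path taken from the log and the command reproduced verbatim (illustrative only, not minikube's internal runner):

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.80:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		// The exact command recorded in the log above.
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
		out, err := sess.CombinedOutput(cmd)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}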
	I1120 20:21:25.295517    8315 client.go:176] duration metric: took 20.443004703s to LocalClient.Create
	I1120 20:21:25.295530    8315 start.go:167] duration metric: took 20.443049547s to libmachine.API.Create "addons-947553"
	I1120 20:21:25.295539    8315 start.go:293] postStartSetup for "addons-947553" (driver="kvm2")
	I1120 20:21:25.295551    8315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:21:25.295599    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:21:25.298453    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.298889    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.298912    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.299118    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.380706    8315 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:21:25.386067    8315 info.go:137] Remote host: Buildroot 2025.02
	I1120 20:21:25.386096    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 20:21:25.386163    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 20:21:25.386186    8315 start.go:296] duration metric: took 90.641008ms for postStartSetup
	I1120 20:21:25.389037    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389412    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.389432    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389654    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:25.389819    8315 start.go:128] duration metric: took 20.539459484s to createHost
	I1120 20:21:25.392104    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392481    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.392504    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392693    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.392952    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.392965    8315 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 20:21:25.493567    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763670085.456620738
	
	I1120 20:21:25.493591    8315 fix.go:216] guest clock: 1763670085.456620738
	I1120 20:21:25.493598    8315 fix.go:229] Guest: 2025-11-20 20:21:25.456620738 +0000 UTC Remote: 2025-11-20 20:21:25.389830223 +0000 UTC m=+20.636741018 (delta=66.790515ms)
	I1120 20:21:25.493614    8315 fix.go:200] guest clock delta is within tolerance: 66.790515ms
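The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew if it is inside a tolerance window. A small sketch of that check, using the exact values from the log (the 2s threshold is an assumption for illustration, not minikube's documented constant):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestDelta parses `date +%s.%N` output from the VM and returns guest-minus-host skew.
	func guestDelta(dateOutput string, host time.Time) time.Duration {
		parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		return time.Unix(sec, nsec).Sub(host)
	}

	func main() {
		// Values from the log: guest 1763670085.456620738, host ...389830223.
		d := guestDelta("1763670085.456620738", time.Unix(1763670085, 389830223))
		if math.Abs(d.Seconds()) < 2 { // tolerance threshold is an assumption
			fmt.Printf("guest clock delta %v is within tolerance\n", d) // prints 66.790515ms
		}
	}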
	I1120 20:21:25.493618    8315 start.go:83] releasing machines lock for "addons-947553", held for 20.643324737s
	I1120 20:21:25.496394    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.496731    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.496750    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.497416    8315 ssh_runner.go:195] Run: cat /version.json
	I1120 20:21:25.497480    8315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:21:25.500666    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.500828    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501105    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501135    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501175    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501196    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501333    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.501488    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.605393    8315 ssh_runner.go:195] Run: systemctl --version
	I1120 20:21:25.612006    8315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:21:25.772800    8315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:21:25.780223    8315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:21:25.780282    8315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:21:25.801102    8315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
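The find/mv step above disables conflicting bridge and podman CNI configs by renaming them with a .mk_disabled suffix, as the "disabled [...] bridge cni config(s)" line confirms. A Go stand-in for that logged shell pipeline (a sketch, assuming the same directory and naming convention):

	package main

	import (
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Mirrors the find's -name *bridge* -or -name *podman* filter.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				p := filepath.Join("/etc/cni/net.d", name)
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					log.Fatal(err)
				}
			}
		}
	}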
	I1120 20:21:25.801129    8315 start.go:496] detecting cgroup driver to use...
	I1120 20:21:25.801204    8315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:21:25.821353    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:21:25.843177    8315 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:21:25.843231    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:21:25.868522    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:21:25.885911    8315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:21:26.035325    8315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:21:26.252665    8315 docker.go:234] disabling docker service ...
	I1120 20:21:26.252745    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:21:26.269964    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:21:26.285883    8315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:21:26.444730    8315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:21:26.588236    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:21:26.605731    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:21:26.631197    8315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:21:26.631278    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.644989    8315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 20:21:26.645074    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.659053    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.672870    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.687322    8315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:21:26.702284    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.716913    8315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.738871    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
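The sed commands above patch CRI-O's drop-in config in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A Go sketch of the first two edits as regex replacements over the same file (a stand-in for the logged sed invocations, requires root):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			log.Fatal(err)
		}
		s := string(data)
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
			log.Fatal(err)
		}
	}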
	I1120 20:21:26.752362    8315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:21:26.763831    8315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1120 20:21:26.763912    8315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1120 20:21:26.789002    8315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
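The sequence above is a probe-then-fallback: the bridge-netfilter sysctl fails because /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding. A compact sketch of that logic (must run as root):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Probe; the path appears only after br_netfilter is loaded.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("modprobe", "br_netfilter"); err != nil && err.Run() != nil {
				log.Fatal("modprobe br_netfilter failed")
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			log.Fatal(err)
		}
	}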
	I1120 20:21:26.803924    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:26.952317    8315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:21:27.200343    8315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:21:27.200435    8315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:21:27.206384    8315 start.go:564] Will wait 60s for crictl version
	I1120 20:21:27.206464    8315 ssh_runner.go:195] Run: which crictl
	I1120 20:21:27.211256    8315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 20:21:27.250686    8315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 20:21:27.250789    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.281244    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.453589    8315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1120 20:21:27.519790    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520199    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:27.520222    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520413    8315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1120 20:21:27.525676    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
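The bash one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal mapping, then append the current one. The same pattern in Go, as a sketch (blank-line handling simplified; requires root to write /etc/hosts):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping for this name (the grep -v in the logged one-liner).
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}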
	I1120 20:21:27.542910    8315 kubeadm.go:884] updating cluster {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:21:27.543059    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:27.543129    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:27.574818    8315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 20:21:27.574926    8315 ssh_runner.go:195] Run: which lz4
	I1120 20:21:27.580276    8315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1120 20:21:27.587089    8315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1120 20:21:27.587120    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1120 20:21:29.151749    8315 crio.go:462] duration metric: took 1.571528535s to copy over tarball
	I1120 20:21:29.151825    8315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1120 20:21:30.840010    8315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.688159594s)
	I1120 20:21:30.840053    8315 crio.go:469] duration metric: took 1.688277204s to extract the tarball
	I1120 20:21:30.840061    8315 ssh_runner.go:146] rm: /preloaded.tar.lz4
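The preload flow above is: stat the target, scp the image tarball onto the VM if absent, unpack it with tar's lz4 filter, then remove it. A sketch of the extraction side in Go, reusing the exact flags from the logged command (needs lz4 on PATH and root for /var):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// The runner first stat()s the target and scps the tarball only if missing.
		if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
			log.Fatalf("preload tarball not present, would copy it first: %v", err)
		}
		out, err := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
		if err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		_ = os.Remove("/preloaded.tar.lz4") // the ssh_runner rm step above
	}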
	I1120 20:21:30.882678    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:30.922657    8315 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:21:30.922680    8315 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:21:30.922687    8315 kubeadm.go:935] updating node { 192.168.39.80 8443 v1.34.1 crio true true} ...
	I1120 20:21:30.922783    8315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-947553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:21:30.922874    8315 ssh_runner.go:195] Run: crio config
	I1120 20:21:30.970750    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:30.970771    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:30.970787    8315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:21:30.970807    8315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-947553 NodeName:addons-947553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:21:30.970921    8315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-947553"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.80"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
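	The stacked YAML above is four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) rendered from the node values in the kubeadm options struct. A minimal text/template stand-in showing how such values could be injected into the InitConfiguration fragment (illustrative only; the struct and field names are not minikube's):

	package main

	import (
		"os"
		"text/template"
	)

	const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(fragment))
		_ = t.Execute(os.Stdout, struct {
			AdvertiseAddress string
			BindPort         int
			NodeName         string
		}{"192.168.39.80", 8443, "addons-947553"})
	}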
	
	I1120 20:21:30.970978    8315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:21:30.984115    8315 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:21:30.984179    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:21:30.997000    8315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1120 20:21:31.019490    8315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:21:31.040334    8315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1120 20:21:31.062447    8315 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I1120 20:21:31.066873    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:31.082252    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:31.225462    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:31.260197    8315 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553 for IP: 192.168.39.80
	I1120 20:21:31.260217    8315 certs.go:195] generating shared ca certs ...
	I1120 20:21:31.260232    8315 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.260386    8315 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 20:21:31.565609    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt ...
	I1120 20:21:31.565637    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt: {Name:mkbaf0e14aa61a2ff1b23e3cacd2c256e32e6647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565863    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key ...
	I1120 20:21:31.565878    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key: {Name:mk6aeca1c4b3f3e4ff969d4a1bc1fecc4b0fa343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565998    8315 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 20:21:32.272316    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt ...
	I1120 20:21:32.272345    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt: {Name:mk6e855dc2ded0db05a3455c6e851abbeb92043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272564    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key ...
	I1120 20:21:32.272590    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key: {Name:mkc4fdf928a4209309cd887410d07a4fb9cad8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272702    8315 certs.go:257] generating profile certs ...
	I1120 20:21:32.272778    8315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key
	I1120 20:21:32.272805    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt with IP's: []
	I1120 20:21:32.531299    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt ...
	I1120 20:21:32.531330    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: {Name:mkacef1d43c5fe9ffb1d09b61b8a2a7db2cf094d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531547    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key ...
	I1120 20:21:32.531568    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key: {Name:mk2cb4e6b2267fb750aa726a4e65ebdfb9212cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531675    8315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2
	I1120 20:21:32.531704    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80]
	I1120 20:21:32.818886    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 ...
	I1120 20:21:32.818915    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2: {Name:mk790b39b3d9776066f9b6fb58232a0c1fea8994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819086    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 ...
	I1120 20:21:32.819099    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2: {Name:mk4563c621ceba8c563d34ed8d2ea6985bc21d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819174    8315 certs.go:382] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt
	I1120 20:21:32.819257    8315 certs.go:386] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key
	I1120 20:21:32.819305    8315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key
	I1120 20:21:32.819322    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt with IP's: []
	I1120 20:21:33.229266    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt ...
	I1120 20:21:33.229303    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt: {Name:mk842c9b1c7d59553f9e9969540d37e3f124f603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229499    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key ...
	I1120 20:21:33.229519    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key: {Name:mk774bcb76c9d8c8959c52bd40c6db81e671bce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229746    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 20:21:33.229789    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:21:33.229825    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:21:33.229867    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 20:21:33.230425    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:21:33.262117    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:21:33.298274    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:21:33.335705    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:21:33.369053    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 20:21:33.401973    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:21:33.434941    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:21:33.467052    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:21:33.499463    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:21:33.533326    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:21:33.557271    8315 ssh_runner.go:195] Run: openssl version
	I1120 20:21:33.565199    8315 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.579252    8315 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:21:33.592359    8315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598287    8315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598357    8315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.606765    8315 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:21:33.620434    8315 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
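	The b5213941.0 symlink above follows OpenSSL's trust-store convention: libraries locate a CA in /etc/ssl/certs via a <subject-hash>.0 link, and the hash comes from `openssl x509 -hash -noout`. A sketch that recreates the logged ln -fs step (requires root; shells out to openssl for the hash):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Prints the subject hash (b5213941 for minikubeCA in this run).
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			log.Fatal(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // `ln -fs` semantics: replace any existing link
		if err := os.Symlink("/usr/share/ca-certificates/minikubeCA.pem", link); err != nil {
			log.Fatal(err)
		}
	}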
	I1120 20:21:33.633673    8315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:21:33.639557    8315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:21:33.639640    8315 kubeadm.go:401] StartCluster: {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:33.639719    8315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:21:33.639785    8315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:21:33.678141    8315 cri.go:89] found id: ""
	I1120 20:21:33.678230    8315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:21:33.692525    8315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:21:33.705815    8315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:21:33.718541    8315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:21:33.718560    8315 kubeadm.go:158] found existing configuration files:
	
	I1120 20:21:33.718602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:21:33.730012    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:21:33.730084    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:21:33.742602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:21:33.754750    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:21:33.754833    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:21:33.773694    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.789522    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:21:33.789573    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.803646    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:21:33.817663    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:21:33.817714    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 20:21:33.830895    8315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1120 20:21:34.010421    8315 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:21:45.965962    8315 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:21:45.966043    8315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:21:45.966134    8315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:21:45.966274    8315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:21:45.966402    8315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:21:45.966485    8315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:21:45.968313    8315 out.go:252]   - Generating certificates and keys ...
	I1120 20:21:45.968415    8315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:21:45.968512    8315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:21:45.968625    8315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:21:45.968701    8315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:21:45.968754    8315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:21:45.968819    8315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:21:45.968913    8315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:21:45.969101    8315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969192    8315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:21:45.969314    8315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969371    8315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:21:45.969421    8315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:21:45.969458    8315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:21:45.969504    8315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:21:45.969545    8315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:21:45.969595    8315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:21:45.969637    8315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:21:45.969697    8315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:21:45.969754    8315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:21:45.969823    8315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:21:45.969888    8315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:21:45.971245    8315 out.go:252]   - Booting up control plane ...
	I1120 20:21:45.971330    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:21:45.971396    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:21:45.971453    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:21:45.971554    8315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:21:45.971660    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:21:45.971754    8315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:21:45.971826    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:21:45.971880    8315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:21:45.972014    8315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:21:45.972174    8315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:21:45.972260    8315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.915384ms
	I1120 20:21:45.972339    8315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:21:45.972417    8315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.80:8443/livez
	I1120 20:21:45.972499    8315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:21:45.972565    8315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:21:45.972626    8315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.009474334s
	I1120 20:21:45.972680    8315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.600510793s
	I1120 20:21:45.972745    8315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502310178s
	I1120 20:21:45.972837    8315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:21:45.972964    8315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:21:45.973026    8315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:21:45.973213    8315 kubeadm.go:319] [mark-control-plane] Marking the node addons-947553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:21:45.973262    8315 kubeadm.go:319] [bootstrap-token] Using token: 2xpoj0.3iafwcplk6gzssxo
	I1120 20:21:45.975478    8315 out.go:252]   - Configuring RBAC rules ...
	I1120 20:21:45.975637    8315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:21:45.975749    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:21:45.975873    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:21:45.975991    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:21:45.976087    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:21:45.976159    8315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:21:45.976260    8315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:21:45.976297    8315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:21:45.976339    8315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:21:45.976345    8315 kubeadm.go:319] 
	I1120 20:21:45.976416    8315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:21:45.976432    8315 kubeadm.go:319] 
	I1120 20:21:45.976492    8315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:21:45.976498    8315 kubeadm.go:319] 
	I1120 20:21:45.976524    8315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:21:45.976573    8315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:21:45.976612    8315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:21:45.976618    8315 kubeadm.go:319] 
	I1120 20:21:45.976662    8315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:21:45.976669    8315 kubeadm.go:319] 
	I1120 20:21:45.976708    8315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:21:45.976716    8315 kubeadm.go:319] 
	I1120 20:21:45.976761    8315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:21:45.976832    8315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:21:45.976903    8315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:21:45.976909    8315 kubeadm.go:319] 
	I1120 20:21:45.976975    8315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:21:45.977039    8315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:21:45.977046    8315 kubeadm.go:319] 
	I1120 20:21:45.977115    8315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977197    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 \
	I1120 20:21:45.977222    8315 kubeadm.go:319] 	--control-plane 
	I1120 20:21:45.977228    8315 kubeadm.go:319] 
	I1120 20:21:45.977318    8315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:21:45.977332    8315 kubeadm.go:319] 
	I1120 20:21:45.977426    8315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977559    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 
	I1120 20:21:45.977570    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:45.977577    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:45.978905    8315 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1120 20:21:45.980206    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1120 20:21:45.998278    8315 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1120 20:21:46.024557    8315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:21:46.024640    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.024705    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-947553 minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-947553 minikube.k8s.io/primary=true
	I1120 20:21:46.163608    8315 ops.go:34] apiserver oom_adj: -16
	I1120 20:21:46.163786    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.664084    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.164553    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.664473    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.164635    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.664221    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.163942    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.663901    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.164591    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.290234    8315 kubeadm.go:1114] duration metric: took 4.265649758s to wait for elevateKubeSystemPrivileges
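	The repeated `kubectl get sa default` runs above are a 500ms poll waiting for kubeadm's controllers to create the default ServiceAccount before the cluster-admin binding can take effect. A plain sleep-loop sketch of that retry (timeout value is an assumption; minikube's internal retry helper differs):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				log.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
		}
		log.Fatal("timed out waiting for default service account")
	}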
	I1120 20:21:50.290282    8315 kubeadm.go:403] duration metric: took 16.650648707s to StartCluster
	I1120 20:21:50.290306    8315 settings.go:142] acquiring lock: {Name:mke92973c8f33ef32fe11f7b266adf74cd3ec47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.290453    8315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:50.290990    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.291268    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:21:50.291283    8315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:50.291344    8315 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 20:21:50.291469    8315 addons.go:70] Setting gcp-auth=true in profile "addons-947553"
	I1120 20:21:50.291484    8315 addons.go:70] Setting ingress=true in profile "addons-947553"
	I1120 20:21:50.291498    8315 mustload.go:66] Loading cluster: addons-947553
	I1120 20:21:50.291500    8315 addons.go:239] Setting addon ingress=true in "addons-947553"
	I1120 20:21:50.291494    8315 addons.go:70] Setting cloud-spanner=true in profile "addons-947553"
	I1120 20:21:50.291519    8315 addons.go:239] Setting addon cloud-spanner=true in "addons-947553"
	I1120 20:21:50.291525    8315 addons.go:70] Setting registry=true in profile "addons-947553"
	I1120 20:21:50.291542    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291555    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291554    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291565    8315 addons.go:239] Setting addon registry=true in "addons-947553"
	I1120 20:21:50.291594    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291595    8315 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.291607    8315 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-947553"
	I1120 20:21:50.291627    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291692    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291474    8315 addons.go:70] Setting yakd=true in profile "addons-947553"
	I1120 20:21:50.292160    8315 addons.go:239] Setting addon yakd=true in "addons-947553"
	I1120 20:21:50.292192    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292250    8315 addons.go:70] Setting inspektor-gadget=true in profile "addons-947553"
	I1120 20:21:50.292272    8315 addons.go:239] Setting addon inspektor-gadget=true in "addons-947553"
	I1120 20:21:50.292297    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292485    8315 addons.go:70] Setting ingress-dns=true in profile "addons-947553"
	I1120 20:21:50.292520    8315 addons.go:239] Setting addon ingress-dns=true in "addons-947553"
	I1120 20:21:50.292545    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292621    8315 addons.go:70] Setting registry-creds=true in profile "addons-947553"
	I1120 20:21:50.292644    8315 addons.go:239] Setting addon registry-creds=true in "addons-947553"
	I1120 20:21:50.292671    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292677    8315 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-947553"
	I1120 20:21:50.292719    8315 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:21:50.292755    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292807    8315 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-947553"
	I1120 20:21:50.292829    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-947553"
	I1120 20:21:50.292880    8315 addons.go:70] Setting metrics-server=true in profile "addons-947553"
	I1120 20:21:50.292897    8315 addons.go:239] Setting addon metrics-server=true in "addons-947553"
	I1120 20:21:50.292922    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293069    8315 out.go:179] * Verifying Kubernetes components...
	I1120 20:21:50.293281    8315 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.293300    8315 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-947553"
	I1120 20:21:50.293321    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293536    8315 addons.go:70] Setting default-storageclass=true in profile "addons-947553"
	I1120 20:21:50.293556    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-947553"
	I1120 20:21:50.293573    8315 addons.go:70] Setting storage-provisioner=true in profile "addons-947553"
	I1120 20:21:50.293591    8315 addons.go:239] Setting addon storage-provisioner=true in "addons-947553"
	I1120 20:21:50.293613    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293979    8315 addons.go:70] Setting volcano=true in profile "addons-947553"
	I1120 20:21:50.294002    8315 addons.go:239] Setting addon volcano=true in "addons-947553"
	I1120 20:21:50.294026    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294103    8315 addons.go:70] Setting volumesnapshots=true in profile "addons-947553"
	I1120 20:21:50.294122    8315 addons.go:239] Setting addon volumesnapshots=true in "addons-947553"
	I1120 20:21:50.294146    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294465    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:50.297973    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.299952    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 20:21:50.299964    8315 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 20:21:50.300060    8315 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 20:21:50.300093    8315 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 20:21:50.299977    8315 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 20:21:50.301985    8315 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-947553"
	I1120 20:21:50.302030    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.302603    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 20:21:50.303185    8315 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 20:21:50.302631    8315 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:50.303261    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	W1120 20:21:50.302916    8315 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 20:21:50.303040    8315 addons.go:239] Setting addon default-storageclass=true in "addons-947553"
	I1120 20:21:50.303355    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.303953    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 20:21:50.303969    8315 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 20:21:50.303973    8315 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 20:21:50.303953    8315 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 20:21:50.304024    8315 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:50.305543    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 20:21:50.304040    8315 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 20:21:50.304099    8315 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 20:21:50.305800    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 20:21:50.304918    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.304913    8315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:21:50.305899    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:50.307319    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 20:21:50.306014    8315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:50.307351    8315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:21:50.307429    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.307470    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 20:21:50.307480    8315 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1120 20:21:50.306784    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:21:50.307511    8315 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:21:50.306817    8315 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 20:21:50.307620    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.306822    8315 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:50.307695    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 20:21:50.307706    8315 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:50.307716    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 20:21:50.306909    8315 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:50.308092    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 20:21:50.308474    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 20:21:50.308512    8315 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 20:21:50.308524    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 20:21:50.308827    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.308882    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309172    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.309208    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309325    8315 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 20:21:50.309319    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.309343    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:50.309353    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:21:50.309929    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.310172    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.311742    8315 out.go:179]   - Using image docker.io/busybox:stable
	I1120 20:21:50.311746    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 20:21:50.311894    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:50.311914    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 20:21:50.313106    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:50.313128    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 20:21:50.314097    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.314587    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 20:21:50.315478    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.315516    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.316257    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.316610    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 20:21:50.317131    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.317791    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318124    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318489    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.318521    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318877    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.319057    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319200    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319245    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 20:21:50.319767    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319780    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319803    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319808    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320039    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320130    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320260    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320721    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.320726    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321176    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321210    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321308    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321337    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321371    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321267    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321416    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321437    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321401    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321692    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321834    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 20:21:50.321903    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321928    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321951    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322097    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322416    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322441    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322690    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322712    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.322755    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323004    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323171    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.323197    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323359    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324196    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.324226    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324375    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.324536    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 20:21:50.325593    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 20:21:50.325607    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 20:21:50.328078    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328534    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.328557    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328735    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	W1120 20:21:50.476524    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.476558    8315 retry.go:31] will retry after 236.913044ms: ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513415    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513438    8315 retry.go:31] will retry after 367.013463ms: ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513646    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513672    8315 retry.go:31] will retry after 332.960576ms: ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
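The three handshake failures above are transient: the guest's sshd is still settling right after the VM boots, so it resets the first connections and each client retries after a short randomized backoff. A rough manual equivalent of that probe, using the key path and user from the sshutil lines above (a sketch only, not minikube's actual retry code):

    for i in 1 2 3; do
      ssh -o StrictHostKeyChecking=no \
          -i /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa \
          docker@192.168.39.80 true && break
      sleep 0.3
    done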
	I1120 20:21:50.932554    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:50.932720    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
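The sed pipeline above patches CoreDNS's Corefile in place: it inserts a hosts block mapping host.minikube.internal to the host-side gateway 192.168.39.1 just before the forward directive, and a log directive just before errors. Reconstructed from the two sed expressions, the patched server block gains roughly:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf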
	I1120 20:21:51.133049    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:51.144339    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:51.194458    8315 node_ready.go:35] waiting up to 6m0s for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206010    8315 node_ready.go:49] node "addons-947553" is "Ready"
	I1120 20:21:51.206043    8315 node_ready.go:38] duration metric: took 11.547378ms for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206057    8315 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:21:51.206112    8315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:21:51.317342    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:51.364561    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:51.396520    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 20:21:51.396550    8315 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 20:21:51.401286    8315 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 20:21:51.401312    8315 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 20:21:51.407832    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:51.408939    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:51.438765    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:51.452371    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:51.487541    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:21:51.487567    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 20:21:51.667634    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:51.705278    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 20:21:51.705307    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 20:21:52.073299    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 20:21:52.073332    8315 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 20:21:52.156840    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 20:21:52.156890    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 20:21:52.182216    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 20:21:52.182260    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 20:21:52.289345    8315 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.289373    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 20:21:52.358156    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:21:52.358186    8315 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:21:52.524224    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:52.790466    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 20:21:52.790495    8315 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 20:21:52.867899    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 20:21:52.867926    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 20:21:52.911549    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.970452    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 20:21:52.970488    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 20:21:53.004660    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.004687    8315 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:21:53.165475    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.165505    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 20:21:53.292981    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 20:21:53.293014    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 20:21:53.388236    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 20:21:53.388266    8315 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 20:21:53.476188    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.678912    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.790164    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 20:21:53.790192    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 20:21:53.898000    8315 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:53.898021    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 20:21:54.089534    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 20:21:54.089570    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 20:21:54.326111    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:54.418621    8315 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.485861131s)
	I1120 20:21:54.418657    8315 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1120 20:21:54.662053    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 20:21:54.662081    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 20:21:54.924608    8315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-947553" context rescaled to 1 replicas
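kapi.go trims the default two-replica coredns Deployment down to a single replica, which is enough for a one-node cluster and frees room for the addon pods. The equivalent change by hand would be roughly:

    kubectl --context addons-947553 -n kube-system scale deployment coredns --replicas=1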
	I1120 20:21:55.256603    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 20:21:55.256640    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 20:21:55.513213    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.380124251s)
	I1120 20:21:55.513226    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.368859446s)
	I1120 20:21:55.513320    8315 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.307185785s)
	I1120 20:21:55.513363    8315 api_server.go:72] duration metric: took 5.222046626s to wait for apiserver process to appear ...
	I1120 20:21:55.513378    8315 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:21:55.513400    8315 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I1120 20:21:55.523525    8315 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I1120 20:21:55.528356    8315 api_server.go:141] control plane version: v1.34.1
	I1120 20:21:55.528379    8315 api_server.go:131] duration metric: took 14.994765ms to wait for apiserver health ...
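api_server.go considers the control plane healthy once /healthz on the VM's apiserver endpoint returns 200 with body "ok". The same probe by hand (-k skips verification of the cluster's self-signed certificate; depending on the cluster's anonymous-auth settings this may instead return 401/403):

    curl -sk https://192.168.39.80:8443/healthz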
	I1120 20:21:55.528386    8315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:21:55.548383    8315 system_pods.go:59] 10 kube-system pods found
	I1120 20:21:55.548433    8315 system_pods.go:61] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.548445    8315 system_pods.go:61] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548456    8315 system_pods.go:61] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548466    8315 system_pods.go:61] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.548475    8315 system_pods.go:61] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.548481    8315 system_pods.go:61] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.548491    8315 system_pods.go:61] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.548496    8315 system_pods.go:61] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.548506    8315 system_pods.go:61] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.548517    8315 system_pods.go:61] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.548528    8315 system_pods.go:74] duration metric: took 20.135717ms to wait for pod list to return data ...
	I1120 20:21:55.548544    8315 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:21:55.562077    8315 default_sa.go:45] found service account: "default"
	I1120 20:21:55.562106    8315 default_sa.go:55] duration metric: took 13.552829ms for default service account to be created ...
	I1120 20:21:55.562116    8315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:21:55.573516    8315 system_pods.go:86] 10 kube-system pods found
	I1120 20:21:55.573548    8315 system_pods.go:89] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.573556    8315 system_pods.go:89] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573563    8315 system_pods.go:89] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573568    8315 system_pods.go:89] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.573572    8315 system_pods.go:89] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.573584    8315 system_pods.go:89] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.573588    8315 system_pods.go:89] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.573591    8315 system_pods.go:89] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.573595    8315 system_pods.go:89] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.573610    8315 system_pods.go:89] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.573619    8315 system_pods.go:126] duration metric: took 11.497162ms to wait for k8s-apps to be running ...
	I1120 20:21:55.573629    8315 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:21:55.573680    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:21:55.821435    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 20:21:55.821456    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 20:21:56.372153    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 20:21:56.372176    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 20:21:57.167628    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.167657    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 20:21:57.654485    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.724650    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 20:21:57.727763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728228    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:57.728257    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728455    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:57.738040    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.420656069s)
	I1120 20:21:57.738102    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.373508925s)
	I1120 20:21:58.308598    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 20:21:58.564754    8315 addons.go:239] Setting addon gcp-auth=true in "addons-947553"
	I1120 20:21:58.564806    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:58.566499    8315 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 20:21:58.568681    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569089    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:58.569115    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569249    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:58.833314    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.424339116s)
	I1120 20:21:58.833336    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.425455784s)
	I1120 20:21:58.833402    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.394606542s)
	I1120 20:22:00.317183    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.864775691s)
	I1120 20:22:00.317236    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.649563834s)
	I1120 20:22:00.317246    8315 addons.go:480] Verifying addon ingress=true in "addons-947553"
	I1120 20:22:00.317313    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.793066584s)
	I1120 20:22:00.317374    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.405778801s)
	I1120 20:22:00.317401    8315 addons.go:480] Verifying addon registry=true in "addons-947553"
	I1120 20:22:00.317473    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.841250467s)
	I1120 20:22:00.317500    8315 addons.go:480] Verifying addon metrics-server=true in "addons-947553"
	I1120 20:22:00.317549    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.638598976s)
	I1120 20:22:00.318753    8315 out.go:179] * Verifying ingress addon...
	I1120 20:22:00.319477    8315 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-947553 service yakd-dashboard -n yakd-dashboard
	
	I1120 20:22:00.319499    8315 out.go:179] * Verifying registry addon...
	I1120 20:22:00.321062    8315 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 20:22:00.321882    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 20:22:00.330255    8315 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:22:00.330274    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:00.330580    8315 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:22:00.330602    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:00.843037    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:00.862027    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
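Each kapi.go:96 line is one iteration of a poll over the pods behind a label selector; "Pending: [<nil>]" means the pod objects exist but their phase has not reached Running yet. Checking the same selectors by hand would look roughly like:

    kubectl --context addons-947553 -n ingress-nginx get pods \
        -l app.kubernetes.io/name=ingress-nginx --watch
    kubectl --context addons-947553 -n kube-system get pods \
        -l kubernetes.io/minikube-addons=registry --watch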
	I1120 20:22:01.136755    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.810594192s)
	I1120 20:22:01.136799    8315 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.563097568s)
	W1120 20:22:01.136810    8315 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 20:22:01.136824    8315 system_svc.go:56] duration metric: took 5.563190734s WaitForService to wait for kubelet
	I1120 20:22:01.136838    8315 retry.go:31] will retry after 297.745206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 20:22:01.136835    8315 kubeadm.go:587] duration metric: took 10.845518493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:22:01.136866    8315 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:22:01.169336    8315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1120 20:22:01.169377    8315 node_conditions.go:123] node cpu capacity is 2
	I1120 20:22:01.169391    8315 node_conditions.go:105] duration metric: took 32.519256ms to run NodePressure ...
	I1120 20:22:01.169403    8315 start.go:242] waiting for startup goroutines ...
	I1120 20:22:01.357701    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:01.358795    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.434928    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:22:01.868679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.868782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.346294    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.352833    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.862753    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.890512    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.996195    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.34165692s)
	I1120 20:22:02.996225    8315 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.429699726s)
	I1120 20:22:02.996254    8315 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:22:02.997930    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:22:02.997950    8315 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 20:22:02.999363    8315 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 20:22:02.999980    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 20:22:03.000816    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 20:22:03.000833    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 20:22:03.047631    8315 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:22:03.047661    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
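kapi.go's verification loop lists pods by label selector in a namespace and keeps polling until every match reports Running, as the repeated kapi.go:96 lines below show. A rough kubectl equivalent of the same readiness gate; the selector and namespace come from the log, while the timeout and the Ready condition (rather than the Running phase that kapi polls) are assumptions:

	kubectl wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  -n kube-system --timeout=5m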
	I1120 20:22:03.095774    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 20:22:03.095800    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 20:22:03.172675    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.172696    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 20:22:03.258447    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.328725    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.328999    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:03.506980    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.835051    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.838342    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.009598    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.059484    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.624514335s)
	I1120 20:22:04.342509    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.346146    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:04.552392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.655990    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.397510493s)
	I1120 20:22:04.657251    8315 addons.go:480] Verifying addon gcp-auth=true in "addons-947553"
	I1120 20:22:04.658765    8315 out.go:179] * Verifying gcp-auth addon...
	I1120 20:22:04.660962    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 20:22:04.689345    8315 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 20:22:04.689379    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:04.830184    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.831805    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.008119    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.171353    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same four kapi.go:96 pollers (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth) repeat "current state: Pending: [<nil>]" every few hundred milliseconds from 20:22:05 through 20:22:28.944; 189 near-identical lines elided ...]
	I1120 20:22:28.944810    8315 kapi.go:107] duration metric: took 28.622926025s to wait for kubernetes.io/minikube-addons=registry ...
	I1120 20:22:29.006863    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:29.167687    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:29.328145    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:29.504218    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:29.664460    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:29.827372    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:30.004445    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:30.164822    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:30.324811    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:30.504410    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:30.665044    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:30.825337    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:31.004318    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:31.164385    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:31.325406    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:31.505029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:31.665134    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:31.825650    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:32.004127    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:32.166139    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:32.324701    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:32.504614    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:32.664944    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:32.825143    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:33.004577    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:33.165685    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:33.325974    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:33.704460    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:33.708873    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:33.825075    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:34.004596    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:34.165867    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:34.325611    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:34.504800    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:34.665454    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:34.825871    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:35.004177    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:35.164697    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:35.326110    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:35.503481    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:35.664737    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:35.826308    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:36.004218    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:36.165000    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:36.324326    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:36.503689    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:36.666782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:36.825294    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:37.005202    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:37.164053    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:37.325572    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:37.505330    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:37.664284    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:37.825262    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:38.004289    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:38.164481    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:38.326051    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:38.503226    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:38.664232    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:38.824502    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:39.004487    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:39.164963    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:39.325878    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:39.505209    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:39.664636    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:39.825100    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:40.003777    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:40.165642    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:40.325683    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:40.504393    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:40.664821    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:40.824897    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:41.004355    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:41.164546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:41.326024    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:41.504280    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:41.664217    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:41.825780    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:42.005113    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:42.164701    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:42.325297    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:42.504448    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:42.665577    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:42.824743    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:43.004833    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:43.165891    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:43.326070    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:43.503696    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:43.664800    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:43.826756    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:44.005306    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:44.164704    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:44.325455    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:44.505302    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:44.664815    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:44.824692    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:45.003742    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:45.164950    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:45.325614    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:45.504532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:45.664827    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:45.826405    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:46.003951    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:46.165370    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:46.325730    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:46.505387    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:46.664689    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:46.825033    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:47.004484    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:47.165449    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:47.325798    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:47.504952    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:47.665632    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:47.825364    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:48.003790    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:48.165543    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:48.324818    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:48.504519    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:48.664630    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:48.825474    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:49.003721    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:49.164517    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:49.326505    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:49.504416    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:49.664711    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:49.825942    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:50.004200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:50.164578    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:50.325328    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:50.503484    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:50.665421    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:50.825294    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:51.004287    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:51.164268    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:51.325315    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:51.504380    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:51.665173    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:51.825228    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:52.004294    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:52.165271    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:52.325922    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:52.504540    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:52.664739    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:52.825458    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:53.003930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:53.165838    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:53.325362    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:53.503610    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:53.664870    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:53.827535    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:54.004328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:54.164077    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same three poll messages ("app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=gcp-auth") repeat every ~160 ms with all three pods still Pending, from 20:22:54 through 20:24:12 ...]
	I1120 20:24:12.164082    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:12.324141    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:12.504612    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:12.664748    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:12.825910    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:13.004630    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:13.165026    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:13.325684    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:13.504463    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:13.664189    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:13.824224    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:14.004212    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:14.165015    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:14.324331    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:14.507504    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:14.664678    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:14.826028    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:15.004824    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:15.165312    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:15.325310    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:15.503525    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:15.664637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:15.825538    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:16.005397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:16.165397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:16.324350    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:16.504613    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:16.665640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:16.825950    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:17.004189    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:17.167663    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:17.326720    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:17.508041    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:17.665546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:17.828365    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:18.004058    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:18.165184    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:18.325634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:18.504817    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:18.668489    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:18.828972    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:19.005704    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:19.167268    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:19.334698    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:19.507751    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:19.667328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:19.831249    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:20.005669    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:20.167145    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:20.328610    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:20.504643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:20.666213    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:20.830891    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:21.006991    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:21.167023    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:21.326125    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:21.512788    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:21.665384    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:21.829776    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:22.003972    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:22.170397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:22.324898    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:22.505825    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:22.665603    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:22.827634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:23.007579    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:23.168453    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:23.327180    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:23.503837    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:23.665184    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:23.824592    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:24.005482    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:24.164766    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:24.330141    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:24.504539    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:24.667427    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:24.835328    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:25.139729    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:25.240898    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:25.326048    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:25.505595    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:25.670610    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:25.827986    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:26.007659    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:26.164981    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:26.331893    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:26.505078    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:26.665057    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:26.824303    8315 kapi.go:107] duration metric: took 2m26.503242857s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 20:24:27.004029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.164962    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:27.504834    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.668267    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.007248    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.166983    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.507055    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.666163    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.005997    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.328979    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.505976    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.669956    8315 kapi.go:107] duration metric: took 2m25.008991629s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 20:24:29.672108    8315 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-947553 cluster.
	I1120 20:24:29.673437    8315 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 20:24:29.674752    8315 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
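The opt-out mentioned in the message above is a plain metadata label on the pod. A minimal client-go sketch of creating such a pod follows; the label key comes from the log itself, while the label value "true", the kubeconfig path, and the namespace are illustrative assumptions (the webhook is described as keying on the label's presence):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pod labeled so the gcp-auth webhook skips mounting GCP credentials into it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "gcr.io/k8s-minikube/busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}
```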
	I1120 20:24:30.011875    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:30.506718    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.005946    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.508062    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.004768    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.513385    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.006643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.504200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:34.004984    8315 kapi.go:107] duration metric: took 2m31.004999967s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
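The three kapi.go:96 loops above poll pods by label selector until they leave Pending, then emit the kapi.go:107 "duration metric" line. A minimal sketch of that pattern with client-go, not minikube's actual implementation; the namespace, selector, poll interval, and timeout here are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector polls until every pod matching selector is Running,
// mirroring the `waiting for pod ... current state: Pending` loop above.
func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}
```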
	I1120 20:24:34.006745    8315 out.go:179] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1120 20:24:34.007905    8315 addons.go:515] duration metric: took 2m43.716565511s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1120 20:24:34.007942    8315 start.go:247] waiting for cluster config update ...
	I1120 20:24:34.007968    8315 start.go:256] writing updated cluster config ...
	I1120 20:24:34.008267    8315 ssh_runner.go:195] Run: rm -f paused
	I1120 20:24:34.016789    8315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:24:34.020696    8315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.026522    8315 pod_ready.go:94] pod "coredns-66bc5c9577-tpfkd" is "Ready"
	I1120 20:24:34.026545    8315 pod_ready.go:86] duration metric: took 5.821939ms for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.029616    8315 pod_ready.go:83] waiting for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.035420    8315 pod_ready.go:94] pod "etcd-addons-947553" is "Ready"
	I1120 20:24:34.035447    8315 pod_ready.go:86] duration metric: took 5.807107ms for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.038012    8315 pod_ready.go:83] waiting for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.042359    8315 pod_ready.go:94] pod "kube-apiserver-addons-947553" is "Ready"
	I1120 20:24:34.042389    8315 pod_ready.go:86] duration metric: took 4.353428ms for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.045156    8315 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.421067    8315 pod_ready.go:94] pod "kube-controller-manager-addons-947553" is "Ready"
	I1120 20:24:34.421095    8315 pod_ready.go:86] duration metric: took 375.9154ms for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.622667    8315 pod_ready.go:83] waiting for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.021658    8315 pod_ready.go:94] pod "kube-proxy-92nmr" is "Ready"
	I1120 20:24:35.021685    8315 pod_ready.go:86] duration metric: took 398.990446ms for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.222270    8315 pod_ready.go:83] waiting for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621176    8315 pod_ready.go:94] pod "kube-scheduler-addons-947553" is "Ready"
	I1120 20:24:35.621208    8315 pod_ready.go:86] duration metric: took 398.900241ms for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621225    8315 pod_ready.go:40] duration metric: took 1.604402122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
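Unlike the phase-based addon waits, the pod_ready.go checks above test the pod's Ready condition. A short sketch of that test under the same client-go assumptions as before; the selector list is copied from the log line above:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries condition Ready=True, the same
// test behind the pod_ready.go:94 `pod "..." is "Ready"` lines above.
func podReady(pod corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One selector per control-plane label listed in the pod_ready.go:37 line.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("pod %q ready=%v\n", p.Name, podReady(p))
		}
	}
}
```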
	I1120 20:24:35.668514    8315 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:24:35.670410    8315 out.go:179] * Done! kubectl is now configured to use "addons-947553" cluster and "default" namespace by default
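The start.go:628 line above reports the minor-version skew between kubectl and the cluster (kubectl officially supports one minor version of difference either way). A tiny sketch of that arithmetic, assuming plain "major.minor.patch" version strings rather than minikube's actual parsing:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns |clientMinor - serverMinor| for versions like "1.34.2".
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, err := minorSkew("1.34.2", "1.34.1")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubectl: 1.34.2, cluster: 1.34.1 (minor skew: %d)\n", skew) // prints skew 0
	if skew > 1 {
		fmt.Println("warning: kubectl and cluster differ by more than one minor version")
	}
}
```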
	
	
	==> CRI-O <==
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.079064224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4aa63a8e-5516-4dbc-9c4f-e62793d2117d name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.079182843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4aa63a8e-5516-4dbc-9c4f-e62793d2117d name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.080451770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be53d649027f9e0e02bf97581914af43a58a7a36a5b8437bd99ca04785d0d7f3,PodSandboxId:0917839be797e27d813c026d318aed7a0c2224a292af2a543bc36d80ec3955e9,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1763670363813097738,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29c05b4e-3fc4-459e-ba3d-d0e3414ca257,},Annotations:map[string]string{io.kubernetes.container.hash: b38ca3e1,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a99aebb18d899ffa6c9914bac02fb3d1ace8033478f0364aba37d026287f48,PodSandboxId:9bb14c55ef2edb57e23effce25b7be7728f18cf5a2666f3d53634d38a2c641a8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1763670361644066094,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-jclw2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 7a09cdd9-c227-4427-a2b9-b5f32de97ab7,},Annotations:map[string]string{io.kubernetes.container.hash: b7102817,io.kubernetes.container.ports: [{\"name\":\"h
ttp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.k
ubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,
},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6
hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c9950d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fda
d87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&
ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5
b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandb
oxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0
aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.h
ash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ef245229b64797e035804e7b0dbd9ee9a637284585046000393a3a2dfff5171,PodSandboxId:932d6b20747f9ce5e768aed9a3d8ea43972e2f7e9c55ac1588d6ecd4127d2e72,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:655d0b5064a21e59e5336558c38ad36198be12f5c2e23abcd18f192966e3d15c,State:CONTAINER_RUNNING,CreatedAt:1763670216204913097,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-dnx8n,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 144048d7-70cb-4183-850c-037db831f39a,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 1deece16,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd,PodSandboxId:402b0cbd3903b8cdea36adcd4f0c9f425b6855d1790d160c8372f64a376bb50c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763670149740471174,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: lo
cal-path-provisioner-648f6765c9-znfrl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f9329f2b-eaa2-4b45-b91d-3433062e9ac0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98895efa4aa643c420478e5c8f298c3bb1fb8e2811b889e362bd9cb5f8101ef4,PodSandboxId:34222e083f1fb9ea60fc3218f0c4599267b6a9aede813a75d1602dcf68caa60d,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1763670148473951731,Labels:map[string]string{io.kubernetes.conta
iner.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-f74ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735676cf-f787-4c40-aea2-353fd6d6c050,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa95
92ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8457eb0e6f4cb4bc79a262d7ecc236575641cb4bb48418b28387cfed69fc606e,PodSandboxId:1f395473e6361691a7ae7d431f265661b7b39f4cf928d6afda09341193b06caf,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a6284319
7e367427efb84d0e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e4e5706768198b632e90feae7e51918ffac8898936ee9c3bbcf036f84c8f5ba1,State:CONTAINER_RUNNING,CreatedAt:1763670136172142722,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-6b586f9694-c76hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8341ed8b-de18-404d-9892-7e44cbdd07e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5328bc,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Im
ageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Im
age:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attem
pt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:
map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4aa63a8e-5516-4dbc-9c4f-e62793d2117d name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.123606144Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20dab673-8d4f-4106-81a0-811480a24f5c name=/runtime.v1.RuntimeService/Version
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.123683862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20dab673-8d4f-4106-81a0-811480a24f5c name=/runtime.v1.RuntimeService/Version
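The journal entries here are CRI-O's server-side traces of CRI gRPC calls; the kubelet (or a tool such as crictl) is the client hitting /runtime.v1.RuntimeService/Version and /ListContainers. A minimal client sketch against the same endpoints, assuming CRI-O's common default socket path and root privileges to reach it:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI-O socket (path is the common default; adjust as needed).
	// On older grpc-go, grpc.Dial takes the same arguments.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same call as the /runtime.v1.RuntimeService/Version trace above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (API %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter triggers the "No filters were applied, returning full
	// container list" path seen in the ListContainers traces.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.State, c.Metadata.Name)
	}
}
```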
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.125133501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29df7d14-4470-45e9-93cb-e68ab66035b9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.126332536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670365126305314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29df7d14-4470-45e9-93cb-e68ab66035b9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.127922128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d6c9a53-d70a-4b90-bbdd-6fbe55eb6b73 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.127992831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d6c9a53-d70a-4b90-bbdd-6fbe55eb6b73 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.129009588Z" level=debug msg="Response: &ListContainersResponse{...}" file="otel-collector/interceptors.go:74" id=7d6c9a53-d70a-4b90-bbdd-6fbe55eb6b73 name=/runtime.v1.RuntimeService/ListContainers [payload omitted: container list identical to the previous ListContainers response]
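For reference, the ListContainers call that produces these dumps can also be issued programmatically. A minimal Go sketch is shown below; the socket path and the use of the k8s.io/cri-api v1 client are assumptions based on a default CRI-O setup, not part of this report's test tooling.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: CRI-O listens on its default socket inside the VM.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC as the /runtime.v1.RuntimeService/ListContainers lines above;
		// an empty filter returns the full container list, which is why the log
		// reports "No filters were applied, returning full container list".
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-17s %-40s %s\n", c.State, c.Metadata.Name, c.Id[:12])
		}
	}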
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.170277487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b766bd10-54a9-41de-9877-7b7a8db10788 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.170648288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b766bd10-54a9-41de-9877-7b7a8db10788 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.173101316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02379ccc-169b-4e4e-98ec-0aaf08734c7f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.174951764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670365174924696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02379ccc-169b-4e4e-98ec-0aaf08734c7f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.175864441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b40e556e-b9ce-4552-aaff-7e5f34b77975 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.175945477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b40e556e-b9ce-4552-aaff-7e5f34b77975 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.176645767Z" level=debug msg="Response: &ListContainersResponse{...}" file="otel-collector/interceptors.go:74" id=b40e556e-b9ce-4552-aaff-7e5f34b77975 name=/runtime.v1.RuntimeService/ListContainers [payload omitted: container list identical to the previous ListContainers response]
pt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:
map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b40e556e-b9ce-4552-aaff-7e5f34b77975 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.211742847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15b00dac-a636-4bd2-9df9-e52a64d2e9f7 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.211836026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15b00dac-a636-4bd2-9df9-e52a64d2e9f7 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.213460696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3755643f-82df-4dc8-a8db-49a71610a04e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.214646224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670365214619800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3755643f-82df-4dc8-a8db-49a71610a04e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.215854950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb6d6fd9-ece3-40f0-a589-c5a5360fab53 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.215931420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb6d6fd9-ece3-40f0-a589-c5a5360fab53 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.216586319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be53d649027f9e0e02bf97581914af43a58a7a36a5b8437bd99ca04785d0d7f3,PodSandboxId:0917839be797e27d813c026d318aed7a0c2224a292af2a543bc36d80ec3955e9,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1763670363813097738,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29c05b4e-3fc4-459e-ba3d-d0e3414ca257,},Annotations:map[string]string{io.kubernetes.container.hash: b38ca3e1,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a99aebb18d899ffa6c9914bac02fb3d1ace8033478f0364aba37d026287f48,PodSandboxId:9bb14c55ef2edb57e23effce25b7be7728f18cf5a2666f3d53634d38a2c641a8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1763670361644066094,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-jclw2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 7a09cdd9-c227-4427-a2b9-b5f32de97ab7,},Annotations:map[string]string{io.kubernetes.container.hash: b7102817,io.kubernetes.container.ports: [{\"name\":\"h
ttp\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.k
ubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,
},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6
hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c9950d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fda
d87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&
ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5
b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandb
oxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0
aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.h
ash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ef245229b64797e035804e7b0dbd9ee9a637284585046000393a3a2dfff5171,PodSandboxId:932d6b20747f9ce5e768aed9a3d8ea43972e2f7e9c55ac1588d6ecd4127d2e72,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:655d0b5064a21e59e5336558c38ad36198be12f5c2e23abcd18f192966e3d15c,State:CONTAINER_RUNNING,CreatedAt:1763670216204913097,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-dnx8n,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 144048d7-70cb-4183-850c-037db831f39a,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 1deece16,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd,PodSandboxId:402b0cbd3903b8cdea36adcd4f0c9f425b6855d1790d160c8372f64a376bb50c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763670149740471174,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: lo
cal-path-provisioner-648f6765c9-znfrl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f9329f2b-eaa2-4b45-b91d-3433062e9ac0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98895efa4aa643c420478e5c8f298c3bb1fb8e2811b889e362bd9cb5f8101ef4,PodSandboxId:34222e083f1fb9ea60fc3218f0c4599267b6a9aede813a75d1602dcf68caa60d,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1763670148473951731,Labels:map[string]string{io.kubernetes.conta
iner.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-f74ln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735676cf-f787-4c40-aea2-353fd6d6c050,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa95
92ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8457eb0e6f4cb4bc79a262d7ecc236575641cb4bb48418b28387cfed69fc606e,PodSandboxId:1f395473e6361691a7ae7d431f265661b7b39f4cf928d6afda09341193b06caf,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a6284319
7e367427efb84d0e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e4e5706768198b632e90feae7e51918ffac8898936ee9c3bbcf036f84c8f5ba1,State:CONTAINER_RUNNING,CreatedAt:1763670136172142722,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-6b586f9694-c76hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8341ed8b-de18-404d-9892-7e44cbdd07e3,},Annotations:map[string]string{io.kubernetes.container.hash: 6f5328bc,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Im
ageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Im
age:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attem
pt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.con
tainer.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:
map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb6d6fd9-ece3-40f0-a589-c5a5360fab53 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:26:05 addons-947553 crio[815]: time="2025-11-20 20:26:05.219784339Z" level=debug msg="Request: &ExecSyncRequest{ContainerId:4ef245229b64797e035804e7b0dbd9ee9a637284585046000393a3a2dfff5171,Cmd:[/bin/gadgettracermanager -liveness],Timeout:2,}" file="otel-collector/interceptors.go:62" id=969ba93d-f52e-4cce-80ea-7488602558ee name=/runtime.v1.RuntimeService/ExecSync
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	be53d649027f9       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          1 second ago         Exited              registry-test                            0                   0917839be797e       registry-test                              default
	22a99aebb18d8       ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03                                        3 seconds ago        Running             headlamp                                 0                   9bb14c55ef2ed       headlamp-6945c6f4d-jclw2                   headlamp
	83c7cffc192d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          About a minute ago   Running             busybox                                  0                   30b4f748049f4       busybox                                    default
	1182df9d08d19       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	3c592e1a3ecfd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	a26090ac24452       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	d3d8b65697554       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             About a minute ago   Running             controller                               0                   0a1212c05ea88       ingress-nginx-controller-6c8bf45fb-6hpj8   ingress-nginx
	a781be0336bcb       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	c7f17ef5a5382       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	fb8563d67522d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago        Running             csi-resizer                              0                   367d0442cb7aa       csi-hostpath-resizer-0                     kube-system
	68eba1ff29e5c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             2 minutes ago        Running             csi-attacher                             0                   77498a7d4320e       csi-hostpath-attacher-0                    kube-system
	4189eecca6982       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago        Running             volume-snapshot-controller               0                   64e4a94a11b34       snapshot-controller-7d9fbc56b8-7n9bg       kube-system
	b13c5a7e788c0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago        Running             csi-external-health-monitor-controller   0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	ebdc020b24013       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   2 minutes ago        Exited              patch                                    0                   aab95fc7e29c5       ingress-nginx-admission-patch-xqmtg        ingress-nginx
	30d944607d06d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago        Running             volume-snapshot-controller               0                   f811a556e9729       snapshot-controller-7d9fbc56b8-944pl       kube-system
	cf24d40d09d97       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   2 minutes ago        Exited              create                                   0                   b81a00087e290       ingress-nginx-admission-create-whk72       ingress-nginx
	4ef245229b647       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago        Running             gadget                                   0                   932d6b20747f9       gadget-dnx8n                               gadget
	7581f788bba24       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   402b0cbd3903b       local-path-provisioner-648f6765c9-znfrl    local-path-storage
	98895efa4aa64       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   34222e083f1fb       registry-proxy-f74ln                       kube-system
	3ed48acc4e6b6       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   e08ae02d97821       kube-ingress-dns-minikube                  kube-system
	8457eb0e6f4cb       docker.io/library/registry@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e                                           3 minutes ago        Running             registry                                 0                   1f395473e6361       registry-6b586f9694-c76hc                  kube-system
	1f0a03ae88dd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   7a8aea6b56873       storage-provisioner                        kube-system
	dc04223232fbc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     4 minutes ago        Running             amd-gpu-device-plugin                    0                   1c75fb61317d9       amd-gpu-device-plugin-sl95v                kube-system
	44ea167ad7358       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago        Running             coredns                                  0                   1b8aec92deac0       coredns-66bc5c9577-tpfkd                   kube-system
	107772b7cd302       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   44459bb4c1592       kube-proxy-92nmr                           kube-system
	1d2feff972c82       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   7854300bd65f2       kube-scheduler-addons-947553               kube-system
	3ce144c0d06ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   c0df804390cc3       kube-apiserver-addons-947553               kube-system
	3f04fbc5a9a9d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   c73098b299e79       kube-controller-manager-addons-947553      kube-system
	1b4f51aca4917       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   959ac70855500       etcd-addons-947553                         kube-system
	
	
	==> coredns [44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86] <==
	[INFO] 10.244.0.8:38281 - 13381 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000419309s
	[INFO] 10.244.0.8:38281 - 4239 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000335145s
	[INFO] 10.244.0.8:38281 - 63093 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000099875s
	[INFO] 10.244.0.8:38281 - 4801 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008321s
	[INFO] 10.244.0.8:38281 - 39674 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000264028s
	[INFO] 10.244.0.8:38281 - 62546 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124048s
	[INFO] 10.244.0.8:38281 - 16805 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000647057s
	[INFO] 10.244.0.8:51997 - 13985 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160466s
	[INFO] 10.244.0.8:51997 - 14298 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000220652s
	[INFO] 10.244.0.8:45076 - 61133 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125223s
	[INFO] 10.244.0.8:45076 - 60865 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000152664s
	[INFO] 10.244.0.8:36522 - 44178 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060404s
	[INFO] 10.244.0.8:36522 - 43995 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078705s
	[INFO] 10.244.0.8:59475 - 4219 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116054s
	[INFO] 10.244.0.8:59475 - 4422 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010261s
	[INFO] 10.244.0.23:44890 - 42394 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000390546s
	[INFO] 10.244.0.23:40413 - 38581 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001287022s
	[INFO] 10.244.0.23:48952 - 288 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001963576s
	[INFO] 10.244.0.23:45971 - 54062 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002169261s
	[INFO] 10.244.0.23:46787 - 19498 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139649s
	[INFO] 10.244.0.23:50609 - 21977 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067547s
	[INFO] 10.244.0.23:44756 - 29378 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.005330443s
	[INFO] 10.244.0.23:59657 - 39385 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005346106s
	[INFO] 10.244.0.27:42107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000463345s
	[INFO] 10.244.0.27:53096 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000254044s
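
Annotation on the CoreDNS excerpt above: the NXDOMAIN replies are the normal ndots:5 search-path expansion (each query is retried against every search domain before the bare name), and the final NOERROR answers for registry.kube-system.svc.cluster.local show that in-cluster DNS resolved the registry service, including for client 10.244.0.27 (the registry-test pod, per the describe output further down). The wget timeout in the failing test therefore looks like a connectivity problem, not a DNS one. A minimal manual re-check, should this need reproducing (the pod name "dns-probe" is made up; the busybox image is the one this run already pulls):

	kubectl --context addons-947553 run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  nslookup registry.kube-system.svc.cluster.local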
	
	
	==> describe nodes <==
	Name:               addons-947553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-947553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-947553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-947553
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-947553"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:21:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-947553
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:26:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:24:49 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:24:49 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:24:49 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:24:49 +0000   Thu, 20 Nov 2025 20:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    addons-947553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ab490c5e4f046af88ecdee8117466b4
	  System UUID:                2ab490c5-e4f0-46af-88ec-dee8117466b4
	  Boot ID:                    1ea0245c-4d70-493b-9a36-f639a36dba5f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  default                     registry-test                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gadget                      gadget-dnx8n                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  headlamp                    headlamp-6945c6f4d-jclw2                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-6hpj8                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m6s
	  kube-system                 amd-gpu-device-plugin-sl95v                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 coredns-66bc5c9577-tpfkd                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m15s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 csi-hostpathplugin-xtf7r                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 etcd-addons-947553                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-addons-947553                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-addons-947553                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-92nmr                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-addons-947553                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 registry-6b586f9694-c76hc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 registry-creds-764b6fb674-zvz8q                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 registry-proxy-f74ln                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-7n9bg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 snapshot-controller-7d9fbc56b8-944pl                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  local-path-storage          helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a    0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  local-path-storage          local-path-provisioner-648f6765c9-znfrl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-nqz6v                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m13s  kube-proxy       
	  Normal  Starting                 4m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s  kubelet          Node addons-947553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s  kubelet          Node addons-947553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s  kubelet          Node addons-947553 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m19s  kubelet          Node addons-947553 status is now: NodeReady
	  Normal  RegisteredNode           4m16s  node-controller  Node addons-947553 event: Registered Node addons-947553 in Controller
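
Nothing in this node view implicates resource pressure: the node is Ready with no taints, and only 850m of 2 CPUs and 388Mi of roughly 4Gi of memory are requested. The same snapshot can be regenerated at any time with:

	kubectl --context addons-947553 describe node addons-947553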
	
	
	==> dmesg <==
	[  +0.082348] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.112626] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.100150] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.135292] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.656096] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.754334] kauditd_printk_skb: 318 callbacks suppressed
	[Nov20 20:22] kauditd_printk_skb: 302 callbacks suppressed
	[  +3.551453] kauditd_printk_skb: 395 callbacks suppressed
	[  +6.168214] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.651247] kauditd_printk_skb: 17 callbacks suppressed
	[Nov20 20:23] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.679825] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.000053] kauditd_printk_skb: 157 callbacks suppressed
	[  +5.059481] kauditd_printk_skb: 109 callbacks suppressed
	[Nov20 20:24] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.445964] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.477031] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.089818] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:25] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.536974] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.509608] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000043] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:26] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45] <==
	{"level":"info","ts":"2025-11-20T20:22:28.934114Z","caller":"traceutil/trace.go:172","msg":"trace[78998058] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:949; }","duration":"118.958555ms","start":"2025-11-20T20:22:28.815146Z","end":"2025-11-20T20:22:28.934105Z","steps":["trace[78998058] 'range keys from in-memory index tree'  (duration: 118.543381ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:22:33.694971Z","caller":"traceutil/trace.go:172","msg":"trace[1388167669] linearizableReadLoop","detail":"{readStateIndex:983; appliedIndex:983; }","duration":"198.598562ms","start":"2025-11-20T20:22:33.496357Z","end":"2025-11-20T20:22:33.694955Z","steps":["trace[1388167669] 'read index received'  (duration: 198.592651ms)","trace[1388167669] 'applied index is now lower than readState.Index'  (duration: 4.872µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:22:33.695109Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.736785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:22:33.695129Z","caller":"traceutil/trace.go:172","msg":"trace[1077703128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:959; }","duration":"198.770833ms","start":"2025-11-20T20:22:33.496353Z","end":"2025-11-20T20:22:33.695123Z","steps":["trace[1077703128] 'agreement among raft nodes before linearized reading'  (duration: 198.704735ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:22:33.695647Z","caller":"traceutil/trace.go:172","msg":"trace[1573417224] transaction","detail":"{read_only:false; response_revision:960; number_of_response:1; }","duration":"208.695382ms","start":"2025-11-20T20:22:33.486941Z","end":"2025-11-20T20:22:33.695637Z","steps":["trace[1573417224] 'process raft request'  (duration: 208.31976ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:23:44.570260Z","caller":"traceutil/trace.go:172","msg":"trace[663488031] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1136; }","duration":"154.066668ms","start":"2025-11-20T20:23:44.416165Z","end":"2025-11-20T20:23:44.570231Z","steps":["trace[663488031] 'read index received'  (duration: 154.021094ms)","trace[663488031] 'applied index is now lower than readState.Index'  (duration: 44.411µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:23:44.570877Z","caller":"traceutil/trace.go:172","msg":"trace[715433296] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"233.967936ms","start":"2025-11-20T20:23:44.336900Z","end":"2025-11-20T20:23:44.570868Z","steps":["trace[715433296] 'process raft request'  (duration: 233.871288ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571611Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.483381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:23:44.571673Z","caller":"traceutil/trace.go:172","msg":"trace[884414279] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1098; }","duration":"111.548598ms","start":"2025-11-20T20:23:44.460117Z","end":"2025-11-20T20:23:44.571666Z","steps":["trace[884414279] 'agreement among raft nodes before linearized reading'  (duration: 111.465445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571061Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.869609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.80\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-11-20T20:23:44.571810Z","caller":"traceutil/trace.go:172","msg":"trace[1446846650] range","detail":"{range_begin:/registry/masterleases/192.168.39.80; range_end:; response_count:1; response_revision:1098; }","duration":"155.64428ms","start":"2025-11-20T20:23:44.416161Z","end":"2025-11-20T20:23:44.571805Z","steps":["trace[1446846650] 'agreement among raft nodes before linearized reading'  (duration: 154.810085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:23:46.528477Z","caller":"traceutil/trace.go:172","msg":"trace[982384876] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"154.809492ms","start":"2025-11-20T20:23:46.373650Z","end":"2025-11-20T20:23:46.528459Z","steps":["trace[982384876] 'process raft request'  (duration: 154.328485ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.123570Z","caller":"traceutil/trace.go:172","msg":"trace[1335763238] linearizableReadLoop","detail":"{readStateIndex:1253; appliedIndex:1253; }","duration":"134.10576ms","start":"2025-11-20T20:24:24.989438Z","end":"2025-11-20T20:24:25.123544Z","steps":["trace[1335763238] 'read index received'  (duration: 134.100119ms)","trace[1335763238] 'applied index is now lower than readState.Index'  (duration: 5.092µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:25.123838Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.381481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-11-20T20:24:25.123864Z","caller":"traceutil/trace.go:172","msg":"trace[1178674559] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1204; }","duration":"134.473479ms","start":"2025-11-20T20:24:24.989384Z","end":"2025-11-20T20:24:25.123857Z","steps":["trace[1178674559] 'agreement among raft nodes before linearized reading'  (duration: 134.302699ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:24:25.124126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.465459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:25.124145Z","caller":"traceutil/trace.go:172","msg":"trace[392254424] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1205; }","duration":"131.486967ms","start":"2025-11-20T20:24:24.992652Z","end":"2025-11-20T20:24:25.124139Z","steps":["trace[392254424] 'agreement among raft nodes before linearized reading'  (duration: 131.453666ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.124311Z","caller":"traceutil/trace.go:172","msg":"trace[1682962710] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"237.606056ms","start":"2025-11-20T20:24:24.886699Z","end":"2025-11-20T20:24:25.124305Z","steps":["trace[1682962710] 'process raft request'  (duration: 237.320378ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.314678Z","caller":"traceutil/trace.go:172","msg":"trace[1797119853] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1279; }","duration":"155.702658ms","start":"2025-11-20T20:24:29.158960Z","end":"2025-11-20T20:24:29.314662Z","steps":["trace[1797119853] 'read index received'  (duration: 155.696769ms)","trace[1797119853] 'applied index is now lower than readState.Index'  (duration: 4.683µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:29.314797Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.822209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:29.314815Z","caller":"traceutil/trace.go:172","msg":"trace[163313341] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1230; }","duration":"155.853309ms","start":"2025-11-20T20:24:29.158956Z","end":"2025-11-20T20:24:29.314809Z","steps":["trace[163313341] 'agreement among raft nodes before linearized reading'  (duration: 155.793828ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.315341Z","caller":"traceutil/trace.go:172","msg":"trace[932727743] transaction","detail":"{read_only:false; response_revision:1231; number_of_response:1; }","duration":"158.601334ms","start":"2025-11-20T20:24:29.156731Z","end":"2025-11-20T20:24:29.315333Z","steps":["trace[932727743] 'process raft request'  (duration: 158.264408ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.860975Z","caller":"traceutil/trace.go:172","msg":"trace[570114600] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"232.699788ms","start":"2025-11-20T20:24:38.628262Z","end":"2025-11-20T20:24:38.860962Z","steps":["trace[570114600] 'process raft request'  (duration: 232.584342ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.862428Z","caller":"traceutil/trace.go:172","msg":"trace[1632150606] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"194.825132ms","start":"2025-11-20T20:24:38.667594Z","end":"2025-11-20T20:24:38.862419Z","steps":["trace[1632150606] 'process raft request'  (duration: 194.764757ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:25:59.796917Z","caller":"traceutil/trace.go:172","msg":"trace[1018787678] transaction","detail":"{read_only:false; response_revision:1587; number_of_response:1; }","duration":"178.519957ms","start":"2025-11-20T20:25:59.618371Z","end":"2025-11-20T20:25:59.796891Z","steps":["trace[1018787678] 'process raft request'  (duration: 178.419059ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:26:05 up 4 min,  0 users,  load average: 4.07, 2.44, 1.05
	Linux addons-947553 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2] <==
	W1120 20:22:19.641955       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1120 20:22:19.659364       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1120 20:23:00.364766       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:00.364849       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 20:23:00.364867       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1120 20:23:00.365762       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:00.365790       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1120 20:23:00.366969       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1120 20:23:34.247008       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	W1120 20:23:34.253741       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:34.253819       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1120 20:23:34.256485       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	E1120 20:23:34.259388       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	E1120 20:23:34.271232       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	I1120 20:23:34.434058       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1120 20:24:45.470175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50698: use of closed network connection
	E1120 20:24:45.698946       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50724: use of closed network connection
	I1120 20:24:55.153735       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.73.86"}
	I1120 20:25:35.271669       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
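
The metrics.k8s.io 503s and connection-refused errors are confined to the window before 20:23:34, while metrics-server was still starting; the line at 20:25:35 ("Nothing (removed from the queue)") shows the APIService eventually reconciled. Its current state can be confirmed with:

	kubectl --context addons-947553 get apiservice v1beta1.metrics.k8s.io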
	
	
	==> kube-controller-manager [3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be] <==
	I1120 20:21:49.546097       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-947553"
	I1120 20:21:49.546178       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 20:21:49.551177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 20:21:49.551353       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:21:49.558938       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:21:49.560164       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:21:49.564482       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:21:49.572448       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:21:49.574897       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:21:49.579336       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:21:54.678834       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1120 20:21:58.672593       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1120 20:22:19.544397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:22:19.546674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1120 20:22:19.546720       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1120 20:22:19.600217       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1120 20:22:19.618675       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 20:22:19.646978       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:22:19.720013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1120 20:22:49.656241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:22:49.730478       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 20:23:19.661239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:23:19.740631       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 20:24:55.213061       1 replica_set.go:587] "Unhandled Error" err="sync \"headlamp/headlamp-6945c6f4d\" failed with pods \"headlamp-6945c6f4d-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I1120 20:24:58.991066       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
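
Both replica-set "Unhandled Error" entries here are startup ordering rather than RBAC misconfiguration: each deployment was synced once before its service account existed (metrics-server at 20:21:58, headlamp at 20:24:55) and succeeded on retry. A spot-check that the account now exists:

	kubectl --context addons-947553 -n headlamp get serviceaccount headlamp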
	
	
	==> kube-proxy [107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf] <==
	I1120 20:21:51.944081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:21:52.047283       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:21:52.059178       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1120 20:21:52.063486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:21:52.317013       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:21:52.317608       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:21:52.319592       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:21:52.353676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:21:52.353988       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:21:52.354004       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:21:52.365989       1 config.go:200] "Starting service config controller"
	I1120 20:21:52.366010       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:21:52.373413       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:21:52.373476       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:21:52.373601       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:21:52.373606       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:21:52.404955       1 config.go:309] "Starting node config controller"
	I1120 20:21:52.405179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:21:52.405460       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:21:52.474183       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:21:52.474283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:21:52.570175       1 shared_informer.go:356] "Caches are synced" controller="service config"
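
kube-proxy's only complaint is the informational "nodePortAddresses is unset" warning (the missing-IPv6 message is expected on this single-stack guest kernel), and the warning suggests its own remedy. One way to apply it, assuming this cluster keeps kube-proxy configuration in the usual kubeadm-generated ConfigMap:

	kubectl --context addons-947553 -n kube-system edit configmap kube-proxy
	# then, inside the config.conf key, set:  nodePortAddresses: ["primary"]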
	
	
	==> kube-scheduler [1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b] <==
	E1120 20:21:42.658146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:42.658289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:42.658479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:42.659065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:21:42.659191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:42.659355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:42.659676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:21:42.660629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:43.501696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:21:43.568808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:21:43.596853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:43.607731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:21:43.612970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:21:43.637766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:21:43.650165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:43.687838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:21:43.786838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:43.825959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:21:43.878175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:43.895745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:43.953162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:21:43.991210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:44.021889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:21:44.053100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 20:21:46.731200       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:25:25 addons-947553 kubelet[1518]: E1120 20:25:25.616581    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670325616067228  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:461956}  inodes_used:{value:166}}"
	Nov 20 20:25:25 addons-947553 kubelet[1518]: E1120 20:25:25.616631    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670325616067228  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:461956}  inodes_used:{value:166}}"
	Nov 20 20:25:29 addons-947553 kubelet[1518]: I1120 20:25:29.401009    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw89l\" (UniqueName: \"kubernetes.io/projected/3fabe4f4-d0a9-40fe-a635-e27af546a8ce-kube-api-access-mw89l\") pod \"task-pv-pod\" (UID: \"3fabe4f4-d0a9-40fe-a635-e27af546a8ce\") " pod="default/task-pv-pod"
	Nov 20 20:25:29 addons-947553 kubelet[1518]: I1120 20:25:29.401173    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f319206-7bda-4a24-a80d-ac987afb3775\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0e240b9d-c64f-11f0-b3a1-2ada7a71e2df\") pod \"task-pv-pod\" (UID: \"3fabe4f4-d0a9-40fe-a635-e27af546a8ce\") " pod="default/task-pv-pod"
	Nov 20 20:25:29 addons-947553 kubelet[1518]: I1120 20:25:29.533677    1518 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-0f319206-7bda-4a24-a80d-ac987afb3775\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0e240b9d-c64f-11f0-b3a1-2ada7a71e2df\") pod \"task-pv-pod\" (UID: \"3fabe4f4-d0a9-40fe-a635-e27af546a8ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/777d42b7e77a189a4c4ca309d8b9956dbb0ca516ae882b02b85b72b1104d203a/globalmount\"" pod="default/task-pv-pod"
	Nov 20 20:25:35 addons-947553 kubelet[1518]: E1120 20:25:35.619972    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670335619343948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:461956}  inodes_used:{value:166}}"
	Nov 20 20:25:35 addons-947553 kubelet[1518]: E1120 20:25:35.620003    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670335619343948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:461956}  inodes_used:{value:166}}"
	Nov 20 20:25:37 addons-947553 kubelet[1518]: E1120 20:25:37.336557    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-nqz6v" podUID="3ac69ce1-c8e4-478b-bc45-5b450445f539"
	Nov 20 20:25:44 addons-947553 kubelet[1518]: I1120 20:25:44.331344    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:25:45 addons-947553 kubelet[1518]: E1120 20:25:45.623368    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670345622693341  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:461956}  inodes_used:{value:166}}"
	Nov 20 20:25:45 addons-947553 kubelet[1518]: E1120 20:25:45.623396    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670345622693341  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:461956}  inodes_used:{value:166}}"
	Nov 20 20:25:48 addons-947553 kubelet[1518]: E1120 20:25:48.332103    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-nqz6v" podUID="3ac69ce1-c8e4-478b-bc45-5b450445f539"
	Nov 20 20:25:55 addons-947553 kubelet[1518]: E1120 20:25:55.626785    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670355626219283  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:461956}  inodes_used:{value:166}}"
	Nov 20 20:25:55 addons-947553 kubelet[1518]: E1120 20:25:55.626834    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670355626219283  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:461956}  inodes_used:{value:166}}"
	Nov 20 20:25:56 addons-947553 kubelet[1518]: E1120 20:25:56.635309    1518 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 20 20:25:56 addons-947553 kubelet[1518]: E1120 20:25:56.635361    1518 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 20 20:25:56 addons-947553 kubelet[1518]: E1120 20:25:56.635617    1518 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a_local-path-storage(7817d1f3-8bad-4399-a504-42ce19947059): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:25:56 addons-947553 kubelet[1518]: E1120 20:25:56.635660    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a" podUID="7817d1f3-8bad-4399-a504-42ce19947059"
	Nov 20 20:25:57 addons-947553 kubelet[1518]: E1120 20:25:57.622805    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a" podUID="7817d1f3-8bad-4399-a504-42ce19947059"
	Nov 20 20:26:00 addons-947553 kubelet[1518]: I1120 20:26:00.332821    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sl95v" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:26:01 addons-947553 kubelet[1518]: I1120 20:26:01.829694    1518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-6945c6f4d-jclw2" podStartSLOduration=1.019647818 podStartE2EDuration="1m6.829672658s" podCreationTimestamp="2025-11-20 20:24:55 +0000 UTC" firstStartedPulling="2025-11-20 20:24:55.811402769 +0000 UTC m=+190.627341378" lastFinishedPulling="2025-11-20 20:26:01.62142761 +0000 UTC m=+256.437366218" observedRunningTime="2025-11-20 20:26:01.827808038 +0000 UTC m=+256.643746664" watchObservedRunningTime="2025-11-20 20:26:01.829672658 +0000 UTC m=+256.645611286"
	Nov 20 20:26:04 addons-947553 kubelet[1518]: E1120 20:26:04.348938    1518 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 20 20:26:04 addons-947553 kubelet[1518]: E1120 20:26:04.349047    1518 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d25f917-4040-4b9c-8bac-9d75a55b633d-gcr-creds podName:1d25f917-4040-4b9c-8bac-9d75a55b633d nodeName:}" failed. No retries permitted until 2025-11-20 20:28:06.34902287 +0000 UTC m=+381.164961482 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/1d25f917-4040-4b9c-8bac-9d75a55b633d-gcr-creds") pod "registry-creds-764b6fb674-zvz8q" (UID: "1d25f917-4040-4b9c-8bac-9d75a55b633d") : secret "registry-creds-gcr" not found
	Nov 20 20:26:05 addons-947553 kubelet[1518]: E1120 20:26:05.630866    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670365629561765  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:26:05 addons-947553 kubelet[1518]: E1120 20:26:05.631456    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670365629561765  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
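
Two separate problems dominate this kubelet excerpt. The toomanyrequests errors are Docker Hub's unauthenticated pull rate limit and account for most of the non-running pods listed below (yakd-dashboard and the local-path helper pod); the eviction-manager "missing image stats" errors appear to be a kubelet/CRI-O image-stats incompatibility and repeat every 10s without further consequence here. The image back-off state is visible with:

	kubectl --context addons-947553 -n yakd-dashboard describe pod yakd-dashboard-5ff678cb9-nqz6v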
	
	
	==> storage-provisioner [1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806] <==
	W1120 20:25:41.774469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:43.779197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:43.789832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:45.794579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:45.808929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:47.813431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:47.820014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:49.823239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:49.832321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:51.836107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:51.841588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:53.844840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:53.852785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:55.856864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:55.864086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:57.869293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:57.885563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:59.891204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:25:59.902200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:26:01.906184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:26:01.938067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:26:03.942391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:26:03.955237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:26:05.961328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:26:05.974874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
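The kubelet errors captured above show why registry-creds-764b6fb674-zvz8q never became ready: its gcr-creds volume cannot mount because the secret "registry-creds-gcr" does not exist in kube-system. A minimal verification/recovery sketch, assuming the same addon config file used later in this run (./testdata/addons_testconfig.json) is still available:

	# Confirm the secret really is missing (expect NotFound)
	kubectl --context addons-947553 -n kube-system get secret registry-creds-gcr
	# Reconfiguring the addon recreates the credential secrets it manages
	out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-947553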
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-947553 -n addons-947553
helpers_test.go:269: (dbg) Run:  kubectl --context addons-947553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: registry-test task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg registry-creds-764b6fb674-zvz8q helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a yakd-dashboard-5ff678cb9-nqz6v
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-947553 describe pod registry-test task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg registry-creds-764b6fb674-zvz8q helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a yakd-dashboard-5ff678cb9-nqz6v
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-947553 describe pod registry-test task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg registry-creds-764b6fb674-zvz8q helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a yakd-dashboard-5ff678cb9-nqz6v: exit status 1 (90.979958ms)

-- stdout --
	Name:             registry-test
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:25:04 +0000
	Labels:           run=registry-test
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  registry-test:
	    Container ID:  cri-o://be53d649027f9e0e02bf97581914af43a58a7a36a5b8437bd99ca04785d0d7f3
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 20 Nov 2025 20:26:03 +0000
	      Finished:     Thu, 20 Nov 2025 20:26:03 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vv8tt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vv8tt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  62s   default-scheduler  Successfully assigned default/registry-test to addons-947553
	  Normal  Pulling    62s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	  Normal  Pulled     3s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox" in 2.162s (58.951s including waiting). Image size: 1462480 bytes.
	  Normal  Created    3s    kubelet            Created container: registry-test
	  Normal  Started    3s    kubelet            Started container registry-test
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:25:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw89l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mw89l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  37s   default-scheduler  Successfully assigned default/task-pv-pod to addons-947553
	  Normal  Pulling    37s   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7w87 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-w7w87:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-whk72" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xqmtg" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-zvz8q" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-nqz6v" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-947553 describe pod registry-test task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg registry-creds-764b6fb674-zvz8q helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a yakd-dashboard-5ff678cb9-nqz6v: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (73.53s)
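The registry-test events above suggest this failure is a timing artifact rather than a broken registry: the busybox pull took 58.951s including waiting, so the wget only ran as the 1m0s kubectl timeout expired (and then exited 0). A hedged mitigation sketch, assuming pre-loading the image is acceptable in this environment:

	# Pre-pull the test image into the node so `kubectl run` does not burn
	# its timeout waiting on the registry pull (illustrative, not the
	# harness's actual fix)
	out/minikube-linux-amd64 -p addons-947553 image pull gcr.io/k8s-minikube/busybox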

TestAddons/parallel/Ingress (492.1s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-947553 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-947553 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-947553 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [261f896c-810b-4000-a18d-13ad1a4b0967] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-947553 -n addons-947553
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-11-20 20:34:08.435747479 +0000 UTC m=+797.322036232
addons_test.go:252: (dbg) Run:  kubectl --context addons-947553 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-947553 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-947553/192.168.39.80
Start Time:       Thu, 20 Nov 2025 20:26:08 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
  IP:  10.244.0.29
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8bvn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-s8bvn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  8m                     default-scheduler  Successfully assigned default/nginx to addons-947553
  Warning  Failed     2m34s (x3 over 6m34s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    101s (x4 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     41s (x4 over 6m34s)    kubelet            Error: ErrImagePull
  Warning  Failed     41s                    kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    2s (x8 over 6m34s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     2s (x8 over 6m34s)     kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-947553 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-947553 logs nginx -n default: exit status 1 (74.791414ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-947553 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
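The events above identify the root cause: docker.io/nginx:alpine cannot be pulled because the runner hit Docker Hub's unauthenticated pull rate limit (toomanyrequests). One possible workaround, sketched under the assumption that Hub credentials are available in the hypothetical variables DOCKER_USER and DOCKER_PASS, is to authenticate pulls via an imagePullSecret on the default service account:

	# Create a docker-registry secret from existing Docker Hub credentials
	kubectl --context addons-947553 create secret docker-registry hub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	# Let unannotated pods (like this run=nginx pod) use it automatically
	kubectl --context addons-947553 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "hub-creds"}]}'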
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-947553 -n addons-947553
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 logs -n 25: (1.194170319s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ --download-only -p binary-mirror-717684 --alsologtostderr --binary-mirror http://127.0.0.1:46607 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ -p binary-mirror-717684                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ addons  │ disable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ addons  │ enable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ start   │ -p addons-947553 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ enable headlamp -p addons-947553 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ ip      │ addons-947553 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                         │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:28 UTC │ 20 Nov 25 20:28 UTC │
	│ addons  │ addons-947553 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │ 20 Nov 25 20:30 UTC │
	│ addons  │ addons-947553 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │ 20 Nov 25 20:31 UTC │
	│ addons  │ addons-947553 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │ 20 Nov 25 20:31 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:04.799759    8315 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:04.799869    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.799880    8315 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:04.799886    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.800101    8315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:21:04.800589    8315 out.go:368] Setting JSON to false
	I1120 20:21:04.801389    8315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":215,"bootTime":1763669850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:21:04.801502    8315 start.go:143] virtualization: kvm guest
	I1120 20:21:04.803491    8315 out.go:179] * [addons-947553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:21:04.804816    8315 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:21:04.804809    8315 notify.go:221] Checking for updates...
	I1120 20:21:04.807406    8315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:04.808794    8315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:04.810101    8315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:04.811420    8315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:21:04.812487    8315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:21:04.813679    8315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:21:04.845057    8315 out.go:179] * Using the kvm2 driver based on user configuration
	I1120 20:21:04.846216    8315 start.go:309] selected driver: kvm2
	I1120 20:21:04.846231    8315 start.go:930] validating driver "kvm2" against <nil>
	I1120 20:21:04.846241    8315 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:21:04.846961    8315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:21:04.847180    8315 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:21:04.847211    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:04.847249    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:04.847263    8315 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1120 20:21:04.847320    8315 start.go:353] cluster config:
	{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:04.847407    8315 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:21:04.848659    8315 out.go:179] * Starting "addons-947553" primary control-plane node in "addons-947553" cluster
	I1120 20:21:04.849659    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:04.849691    8315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:21:04.849701    8315 cache.go:65] Caching tarball of preloaded images
	I1120 20:21:04.849792    8315 preload.go:238] Found /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:21:04.849803    8315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:21:04.850086    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:04.850110    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json: {Name:mk61841fddacaf75a98d91c699b32f9aeeaf9c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:04.850231    8315 start.go:360] acquireMachinesLock for addons-947553: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 20:21:04.850284    8315 start.go:364] duration metric: took 40.752µs to acquireMachinesLock for "addons-947553"
	I1120 20:21:04.850302    8315 start.go:93] Provisioning new machine with config: &{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:04.850352    8315 start.go:125] createHost starting for "" (driver="kvm2")
	I1120 20:21:04.852328    8315 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1120 20:21:04.852480    8315 start.go:159] libmachine.API.Create for "addons-947553" (driver="kvm2")
	I1120 20:21:04.852506    8315 client.go:173] LocalClient.Create starting
	I1120 20:21:04.852580    8315 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem
	I1120 20:21:05.105122    8315 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem
	I1120 20:21:05.182169    8315 main.go:143] libmachine: creating domain...
	I1120 20:21:05.182188    8315 main.go:143] libmachine: creating network...
	I1120 20:21:05.183682    8315 main.go:143] libmachine: found existing default network
	I1120 20:21:05.183926    8315 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.184462    8315 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d98350}
	I1120 20:21:05.184549    8315 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-947553</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.190086    8315 main.go:143] libmachine: creating private network mk-addons-947553 192.168.39.0/24...
	I1120 20:21:05.255182    8315 main.go:143] libmachine: private network mk-addons-947553 192.168.39.0/24 created
	I1120 20:21:05.255605    8315 main.go:143] libmachine: <network>
	  <name>mk-addons-947553</name>
	  <uuid>aa8efef2-a4fa-46da-99ec-8e728046a9cf</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:9d:8a:68'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
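	The XML dumps above are the objects libmachine just created through the libvirt API. The same state can be inspected by hand with virsh (a sketch, assuming access to the qemu:///system URI this run connects to):
	  # List minikube's networks and dump the profile-specific one
	  virsh --connect qemu:///system net-list --all
	  virsh --connect qemu:///system net-dumpxml mk-addons-947553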
	
	I1120 20:21:05.255642    8315 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.255667    8315 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1120 20:21:05.255686    8315 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.255775    8315 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21923-3793/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1120 20:21:05.515325    8315 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa...
	I1120 20:21:05.718020    8315 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk...
	I1120 20:21:05.718065    8315 main.go:143] libmachine: Writing magic tar header
	I1120 20:21:05.718104    8315 main.go:143] libmachine: Writing SSH key tar header
	I1120 20:21:05.718203    8315 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.718284    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553
	I1120 20:21:05.718335    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 (perms=drwx------)
	I1120 20:21:05.718363    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines
	I1120 20:21:05.718383    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines (perms=drwxr-xr-x)
	I1120 20:21:05.718404    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.718421    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube (perms=drwxr-xr-x)
	I1120 20:21:05.718438    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793
	I1120 20:21:05.718456    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793 (perms=drwxrwxr-x)
	I1120 20:21:05.718473    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1120 20:21:05.718490    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1120 20:21:05.718505    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1120 20:21:05.718521    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1120 20:21:05.718536    8315 main.go:143] libmachine: checking permissions on dir: /home
	I1120 20:21:05.718549    8315 main.go:143] libmachine: skipping /home - not owner
	I1120 20:21:05.718557    8315 main.go:143] libmachine: defining domain...
	I1120 20:21:05.719886    8315 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
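	Defining the domain only registers it; the log below shows libmachine then activating both networks and starting it. Roughly the virsh equivalent of that sequence (a sketch; minikube drives this through the libvirt API, not the CLI):
	  # With the domain XML above saved to a file:
	  virsh --connect qemu:///system define addons-947553.xml
	  virsh --connect qemu:///system start addons-947553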
	
	I1120 20:21:05.727760    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:79:1f:b5 in network default
	I1120 20:21:05.728410    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:05.728434    8315 main.go:143] libmachine: starting domain...
	I1120 20:21:05.728441    8315 main.go:143] libmachine: ensuring networks are active...
	I1120 20:21:05.729136    8315 main.go:143] libmachine: Ensuring network default is active
	I1120 20:21:05.729504    8315 main.go:143] libmachine: Ensuring network mk-addons-947553 is active
	I1120 20:21:05.730087    8315 main.go:143] libmachine: getting domain XML...
	I1120 20:21:05.731121    8315 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <uuid>2ab490c5-e4f0-46af-88ec-dee8117466b4</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:a7:2c'/>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:79:1f:b5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1120 20:21:07.012614    8315 main.go:143] libmachine: waiting for domain to start...
	I1120 20:21:07.013937    8315 main.go:143] libmachine: domain is now running
	I1120 20:21:07.013958    8315 main.go:143] libmachine: waiting for IP...
	I1120 20:21:07.014713    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.015361    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.015380    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.015661    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.015708    8315 retry.go:31] will retry after 270.684091ms: waiting for domain to come up
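	The retry loop here mirrors libmachine's two lookup sources: first the libvirt DHCP lease table for the private network, then the host ARP cache keyed by the domain's MAC. Checked by hand it might look like (MAC taken from the log above):
	  # Source 1: DHCP leases on the profile network
	  virsh --connect qemu:///system net-dhcp-leases mk-addons-947553
	  # Source 2: host neighbor/ARP cache for the VM's MAC
	  ip neigh | grep -i '52:54:00:7b:a7:2c'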
	I1120 20:21:07.288186    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.288839    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.288865    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.289198    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.289247    8315 retry.go:31] will retry after 384.258097ms: waiting for domain to come up
	I1120 20:21:07.674731    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.675347    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.675362    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.675602    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.675642    8315 retry.go:31] will retry after 325.268494ms: waiting for domain to come up
	I1120 20:21:08.002089    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.002712    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.002729    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.003011    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.003044    8315 retry.go:31] will retry after 532.953777ms: waiting for domain to come up
	I1120 20:21:08.537708    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.538539    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.538554    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.538839    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.538878    8315 retry.go:31] will retry after 671.32775ms: waiting for domain to come up
	I1120 20:21:09.212032    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.212741    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.212765    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.213102    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.213142    8315 retry.go:31] will retry after 640.716702ms: waiting for domain to come up
	I1120 20:21:09.855420    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.856063    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.856083    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.856391    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.856428    8315 retry.go:31] will retry after 715.495515ms: waiting for domain to come up
	I1120 20:21:10.573053    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:10.573668    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:10.573685    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:10.574006    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:10.574049    8315 retry.go:31] will retry after 1.386473849s: waiting for domain to come up
	I1120 20:21:11.962706    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:11.963438    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:11.963454    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:11.963745    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:11.963779    8315 retry.go:31] will retry after 1.671471747s: waiting for domain to come up
	I1120 20:21:13.637832    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:13.638601    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:13.638620    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:13.639009    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:13.639040    8315 retry.go:31] will retry after 1.524844768s: waiting for domain to come up
	I1120 20:21:15.165792    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:15.166517    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:15.166555    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:15.166908    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:15.166949    8315 retry.go:31] will retry after 2.171556586s: waiting for domain to come up
	I1120 20:21:17.341326    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:17.341989    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:17.342008    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:17.342371    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:17.342408    8315 retry.go:31] will retry after 2.613437366s: waiting for domain to come up
	I1120 20:21:19.957329    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:19.958097    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:19.958115    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:19.958466    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:19.958501    8315 retry.go:31] will retry after 4.105323605s: waiting for domain to come up
	I1120 20:21:24.068938    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069767    8315 main.go:143] libmachine: domain addons-947553 has current primary IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069790    8315 main.go:143] libmachine: found domain IP: 192.168.39.80
	I1120 20:21:24.069802    8315 main.go:143] libmachine: reserving static IP address...
	I1120 20:21:24.070350    8315 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-947553", mac: "52:54:00:7b:a7:2c", ip: "192.168.39.80"} in network mk-addons-947553
	I1120 20:21:24.251658    8315 main.go:143] libmachine: reserved static IP address 192.168.39.80 for domain addons-947553
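The retry loop above polls libvirt for the domain's DHCP lease (falling back to ARP) with a growing, jittered delay until an address appears. A minimal Go sketch of that wait-for-IP pattern, assuming a hypothetical lookupIP helper in place of the real libvirt query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt lease
// table (and falling back to ARP) for the domain's current address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("no network interface addresses found")
}

// waitForIP retries lookupIP with a jittered, growing delay, mirroring
// the 270ms..4.1s spacing visible in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 3*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for domain %s to come up", domain)
}

func main() {
	if ip, err := waitForIP("addons-947553", 5*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found domain IP:", ip)
	}
}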
	I1120 20:21:24.251676    8315 main.go:143] libmachine: waiting for SSH...
	I1120 20:21:24.251682    8315 main.go:143] libmachine: Getting to WaitForSSH function...
	I1120 20:21:24.254839    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255480    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.255507    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255698    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.255932    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.255946    8315 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1120 20:21:24.357511    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:21:24.357947    8315 main.go:143] libmachine: domain creation complete
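The SSH wait above simply runs "exit 0" on the guest until it succeeds. A rough sketch of that reachability probe using golang.org/x/crypto/ssh; the user, key path, and address are illustrative:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH runs "exit 0" on the guest, the same liveness check the
// log shows under "waiting for SSH...".
func probeSSH(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not production
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // a non-nil error means the command failed
}

func main() {
	if err := probeSSH("192.168.39.80:22", "/path/to/id_rsa"); err != nil {
		fmt.Println("ssh not ready:", err)
		return
	}
	fmt.Println("ssh is up")
}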
	I1120 20:21:24.359373    8315 machine.go:94] provisionDockerMachine start ...
	I1120 20:21:24.361503    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.361927    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.361949    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.362121    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.362368    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.362381    8315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:21:24.462018    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1120 20:21:24.462045    8315 buildroot.go:166] provisioning hostname "addons-947553"
	I1120 20:21:24.464884    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465302    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.465327    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465556    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.465783    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.465796    8315 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-947553 && echo "addons-947553" | sudo tee /etc/hostname
	I1120 20:21:24.590591    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947553
	
	I1120 20:21:24.593332    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593716    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.593739    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593959    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.594201    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.594220    8315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-947553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-947553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-947553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:21:24.704349    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
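The shell above edits /etc/hosts idempotently: skip if the hostname is already mapped, rewrite an existing 127.0.1.1 entry, otherwise append one. The same transform as a small Go function (a sketch, not minikube's actual code):

package main

import (
	"fmt"
	"strings"
)

// setHostsName mirrors the shell: no-op if the name is already mapped,
// rewrite an existing 127.0.1.1 line, else append a new entry.
func setHostsName(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return hosts // hostname already mapped
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(setHostsName("127.0.0.1 localhost", "addons-947553"))
}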
	I1120 20:21:24.704375    8315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 20:21:24.704425    8315 buildroot.go:174] setting up certificates
	I1120 20:21:24.704437    8315 provision.go:84] configureAuth start
	I1120 20:21:24.707018    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.707382    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.707405    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709518    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709819    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.709844    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709960    8315 provision.go:143] copyHostCerts
	I1120 20:21:24.710021    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 20:21:24.710131    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 20:21:24.710204    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 20:21:24.710279    8315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.addons-947553 san=[127.0.0.1 192.168.39.80 addons-947553 localhost minikube]
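The server cert above is signed by the local minikube CA and carries IP and DNS SANs so it validates for every name the VM answers to. A compact sketch of issuing such a cert with crypto/x509 (error handling elided for brevity; values taken from the san list in the log):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, standing in for the minikube CA on disk.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the IP and DNS SANs shown in the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-947553"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.80")},
		DNSNames:     []string{"addons-947553", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)

	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}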
	I1120 20:21:24.868893    8315 provision.go:177] copyRemoteCerts
	I1120 20:21:24.868955    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:21:24.871421    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.871778    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.871813    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.872001    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:24.954555    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:21:24.986020    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:21:25.016669    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:21:25.046712    8315 provision.go:87] duration metric: took 342.262806ms to configureAuth
	I1120 20:21:25.046739    8315 buildroot.go:189] setting minikube options for container-runtime
	I1120 20:21:25.046974    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:25.049642    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050132    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.050155    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050331    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.050555    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.050571    8315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:21:25.295480    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:21:25.295505    8315 machine.go:97] duration metric: took 936.115627ms to provisionDockerMachine
	I1120 20:21:25.295517    8315 client.go:176] duration metric: took 20.443004703s to LocalClient.Create
	I1120 20:21:25.295530    8315 start.go:167] duration metric: took 20.443049547s to libmachine.API.Create "addons-947553"
	I1120 20:21:25.295539    8315 start.go:293] postStartSetup for "addons-947553" (driver="kvm2")
	I1120 20:21:25.295551    8315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:21:25.295599    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:21:25.298453    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.298889    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.298912    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.299118    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.380706    8315 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:21:25.386067    8315 info.go:137] Remote host: Buildroot 2025.02
	I1120 20:21:25.386096    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 20:21:25.386163    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 20:21:25.386186    8315 start.go:296] duration metric: took 90.641008ms for postStartSetup
	I1120 20:21:25.389037    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389412    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.389432    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389654    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:25.389819    8315 start.go:128] duration metric: took 20.539459484s to createHost
	I1120 20:21:25.392104    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392481    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.392504    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392693    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.392952    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.392965    8315 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 20:21:25.493567    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763670085.456620738
	
	I1120 20:21:25.493591    8315 fix.go:216] guest clock: 1763670085.456620738
	I1120 20:21:25.493598    8315 fix.go:229] Guest: 2025-11-20 20:21:25.456620738 +0000 UTC Remote: 2025-11-20 20:21:25.389830223 +0000 UTC m=+20.636741018 (delta=66.790515ms)
	I1120 20:21:25.493614    8315 fix.go:200] guest clock delta is within tolerance: 66.790515ms
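The guest clock check parses `date +%s.%N` from the VM and compares it to host time; drift inside the tolerance is logged and ignored. A sketch of that comparison using the values from the log (the one-second tolerance here is an assumption):

package main

import (
	"fmt"
	"time"
)

// clockDelta converts the guest's `date +%s.%N` output to a time and
// returns guest minus host. Nanosecond precision is lost in the
// float64, which is fine for a tolerance check.
func clockDelta(guestEpoch float64, host time.Time) time.Duration {
	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
	return guest.Sub(host)
}

func main() {
	const tolerance = time.Second // assumed tolerance, for illustration
	d := clockDelta(1763670085.456620738, time.Unix(0, 1763670085389830223))
	if d > -tolerance && d < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock drifted by %v, would resync\n", d)
	}
}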
	I1120 20:21:25.493618    8315 start.go:83] releasing machines lock for "addons-947553", held for 20.643324737s
	I1120 20:21:25.496394    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.496731    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.496750    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.497416    8315 ssh_runner.go:195] Run: cat /version.json
	I1120 20:21:25.497480    8315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:21:25.500666    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.500828    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501105    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501135    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501175    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501196    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501333    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.501488    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.605393    8315 ssh_runner.go:195] Run: systemctl --version
	I1120 20:21:25.612006    8315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:21:25.772800    8315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:21:25.780223    8315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:21:25.780282    8315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:21:25.801102    8315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 20:21:25.801129    8315 start.go:496] detecting cgroup driver to use...
	I1120 20:21:25.801204    8315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:21:25.821353    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:21:25.843177    8315 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:21:25.843231    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:21:25.868522    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:21:25.885911    8315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:21:26.035325    8315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:21:26.252665    8315 docker.go:234] disabling docker service ...
	I1120 20:21:26.252745    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:21:26.269964    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:21:26.285883    8315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:21:26.444730    8315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:21:26.588236    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:21:26.605731    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:21:26.631197    8315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:21:26.631278    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.644989    8315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 20:21:26.645074    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.659053    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.672870    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.687322    8315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:21:26.702284    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.716913    8315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.738871    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
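Each sed one-liner above rewrites a single key in /etc/crio/crio.conf.d/02-crio.conf. The pause_image edit expressed as a Go regexp, for illustration only (minikube really shells out to sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

// setPauseImage performs the same whole-line rewrite as the sed
// command above: any existing pause_image line is replaced.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, `pause_image = "`+image+`"`)
}

func main() {
	conf := "# pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
}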
	I1120 20:21:26.752362    8315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:21:26.763831    8315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1120 20:21:26.763912    8315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1120 20:21:26.789002    8315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
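When the bridge-nf sysctl cannot be read, the br_netfilter module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A local sketch of that fallback sequence (the real commands run over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf
// sysctl is missing, load br_netfilter, then enable IPv4 forwarding.
// Requires root when run for real.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key is absent until the module is loaded.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}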
	I1120 20:21:26.803924    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:26.952317    8315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:21:27.200343    8315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:21:27.200435    8315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:21:27.206384    8315 start.go:564] Will wait 60s for crictl version
	I1120 20:21:27.206464    8315 ssh_runner.go:195] Run: which crictl
	I1120 20:21:27.211256    8315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 20:21:27.250686    8315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 20:21:27.250789    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.281244    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.453589    8315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1120 20:21:27.519790    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520199    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:27.520222    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520413    8315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1120 20:21:27.525676    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:27.542910    8315 kubeadm.go:884] updating cluster {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:21:27.543059    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:27.543129    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:27.574818    8315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 20:21:27.574926    8315 ssh_runner.go:195] Run: which lz4
	I1120 20:21:27.580276    8315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1120 20:21:27.587089    8315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1120 20:21:27.587120    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1120 20:21:29.151749    8315 crio.go:462] duration metric: took 1.571528535s to copy over tarball
	I1120 20:21:29.151825    8315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1120 20:21:30.840010    8315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.688159594s)
	I1120 20:21:30.840053    8315 crio.go:469] duration metric: took 1.688277204s to extract the tarball
	I1120 20:21:30.840061    8315 ssh_runner.go:146] rm: /preloaded.tar.lz4
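The preload path avoids pulling every image individually: scp the lz4 tarball into the VM, untar it into /var with xattrs preserved so file capabilities survive, then delete it. A sketch of the extract step:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload untars the lz4-compressed image tarball into /var,
// preserving xattrs (matching the tar flags in the log), then removes it.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}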
	I1120 20:21:30.882678    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:30.922657    8315 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:21:30.922680    8315 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:21:30.922687    8315 kubeadm.go:935] updating node { 192.168.39.80 8443 v1.34.1 crio true true} ...
	I1120 20:21:30.922783    8315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-947553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:21:30.922874    8315 ssh_runner.go:195] Run: crio config
	I1120 20:21:30.970750    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:30.970771    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:30.970787    8315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:21:30.970807    8315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-947553 NodeName:addons-947553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:21:30.970921    8315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-947553"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.80"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
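minikube renders the kubeadm config above from Go templates, substituting the node IP, port, and name. An abbreviated, hypothetical template covering only the InitConfiguration slice (the real templates live in minikube's bootstrapper package):

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl is an abbreviated, hypothetical template covering only
// a slice of the config printed above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.39.80", 8443, "addons-947553"})
}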
	I1120 20:21:30.970978    8315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:21:30.984115    8315 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:21:30.984179    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:21:30.997000    8315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1120 20:21:31.019490    8315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:21:31.040334    8315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1120 20:21:31.062447    8315 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I1120 20:21:31.066873    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:31.082252    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:31.225462    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:31.260197    8315 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553 for IP: 192.168.39.80
	I1120 20:21:31.260217    8315 certs.go:195] generating shared ca certs ...
	I1120 20:21:31.260232    8315 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.260386    8315 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 20:21:31.565609    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt ...
	I1120 20:21:31.565637    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt: {Name:mkbaf0e14aa61a2ff1b23e3cacd2c256e32e6647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565863    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key ...
	I1120 20:21:31.565878    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key: {Name:mk6aeca1c4b3f3e4ff969d4a1bc1fecc4b0fa343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565998    8315 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 20:21:32.272316    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt ...
	I1120 20:21:32.272345    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt: {Name:mk6e855dc2ded0db05a3455c6e851abbeb92043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272564    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key ...
	I1120 20:21:32.272590    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key: {Name:mkc4fdf928a4209309cd887410d07a4fb9cad8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272702    8315 certs.go:257] generating profile certs ...
	I1120 20:21:32.272778    8315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key
	I1120 20:21:32.272805    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt with IP's: []
	I1120 20:21:32.531299    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt ...
	I1120 20:21:32.531330    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: {Name:mkacef1d43c5fe9ffb1d09b61b8a2a7db2cf094d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531547    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key ...
	I1120 20:21:32.531568    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key: {Name:mk2cb4e6b2267fb750aa726a4e65ebdfb9212cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531675    8315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2
	I1120 20:21:32.531704    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80]
	I1120 20:21:32.818886    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 ...
	I1120 20:21:32.818915    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2: {Name:mk790b39b3d9776066f9b6fb58232a0c1fea8994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819086    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 ...
	I1120 20:21:32.819099    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2: {Name:mk4563c621ceba8c563d34ed8d2ea6985bc21d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819174    8315 certs.go:382] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt
	I1120 20:21:32.819257    8315 certs.go:386] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key
	I1120 20:21:32.819305    8315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key
	I1120 20:21:32.819322    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt with IP's: []
	I1120 20:21:33.229266    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt ...
	I1120 20:21:33.229303    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt: {Name:mk842c9b1c7d59553f9e9969540d37e3f124f603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229499    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key ...
	I1120 20:21:33.229519    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key: {Name:mk774bcb76c9d8c8959c52bd40c6db81e671bce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229746    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 20:21:33.229789    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:21:33.229825    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:21:33.229867    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 20:21:33.230425    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:21:33.262117    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:21:33.298274    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:21:33.335705    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:21:33.369053    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 20:21:33.401973    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:21:33.434941    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:21:33.467052    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:21:33.499463    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:21:33.533326    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:21:33.557271    8315 ssh_runner.go:195] Run: openssl version
	I1120 20:21:33.565199    8315 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.579252    8315 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:21:33.592359    8315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598287    8315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598357    8315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.606765    8315 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:21:33.620434    8315 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 20:21:33.633673    8315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:21:33.639557    8315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:21:33.639640    8315 kubeadm.go:401] StartCluster: {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:33.639719    8315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:21:33.639785    8315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:21:33.678141    8315 cri.go:89] found id: ""
	I1120 20:21:33.678230    8315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:21:33.692525    8315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:21:33.705815    8315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:21:33.718541    8315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:21:33.718560    8315 kubeadm.go:158] found existing configuration files:
	
	I1120 20:21:33.718602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:21:33.730012    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:21:33.730084    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:21:33.742602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:21:33.754750    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:21:33.754833    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:21:33.773694    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.789522    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:21:33.789573    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.803646    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:21:33.817663    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:21:33.817714    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
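
The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each control-plane kubeconfig is kept only if it already points at the expected API endpoint, and is otherwise deleted so that `kubeadm init` can regenerate it. A minimal Go sketch of that pattern follows; runRemote is a hypothetical stand-in for the ssh_runner seen in the log, not minikube's actual API.

	package main

	import (
		"fmt"
		"strings"
	)

	// runRemote is a hypothetical stand-in for the ssh_runner in the log;
	// here it just echoes the command and reports grep as finding nothing.
	func runRemote(cmd string) error {
		fmt.Println("Run:", cmd)
		if strings.HasPrefix(cmd, "sudo grep") {
			return fmt.Errorf("Process exited with status 2")
		}
		return nil
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep exits non-zero when the endpoint (or the file) is missing;
			// the stale file is then removed so `kubeadm init` can rewrite it.
			if err := runRemote("sudo grep " + endpoint + " " + f); err != nil {
				runRemote("sudo rm -f " + f)
			}
		}
	}
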
	I1120 20:21:33.830895    8315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1120 20:21:34.010421    8315 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:21:45.965962    8315 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:21:45.966043    8315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:21:45.966134    8315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:21:45.966274    8315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:21:45.966402    8315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:21:45.966485    8315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:21:45.968313    8315 out.go:252]   - Generating certificates and keys ...
	I1120 20:21:45.968415    8315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:21:45.968512    8315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:21:45.968625    8315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:21:45.968701    8315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:21:45.968754    8315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:21:45.968819    8315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:21:45.968913    8315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:21:45.969101    8315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969192    8315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:21:45.969314    8315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969371    8315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:21:45.969421    8315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:21:45.969458    8315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:21:45.969504    8315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:21:45.969545    8315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:21:45.969595    8315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:21:45.969637    8315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:21:45.969697    8315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:21:45.969754    8315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:21:45.969823    8315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:21:45.969888    8315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:21:45.971245    8315 out.go:252]   - Booting up control plane ...
	I1120 20:21:45.971330    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:21:45.971396    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:21:45.971453    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:21:45.971554    8315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:21:45.971660    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:21:45.971754    8315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:21:45.971826    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:21:45.971880    8315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:21:45.972014    8315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:21:45.972174    8315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:21:45.972260    8315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.915384ms
	I1120 20:21:45.972339    8315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:21:45.972417    8315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.80:8443/livez
	I1120 20:21:45.972499    8315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:21:45.972565    8315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:21:45.972626    8315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.009474334s
	I1120 20:21:45.972680    8315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.600510793s
	I1120 20:21:45.972745    8315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502310178s
	I1120 20:21:45.972837    8315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:21:45.972964    8315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:21:45.973026    8315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:21:45.973213    8315 kubeadm.go:319] [mark-control-plane] Marking the node addons-947553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:21:45.973262    8315 kubeadm.go:319] [bootstrap-token] Using token: 2xpoj0.3iafwcplk6gzssxo
	I1120 20:21:45.975478    8315 out.go:252]   - Configuring RBAC rules ...
	I1120 20:21:45.975637    8315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:21:45.975749    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:21:45.975873    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:21:45.975991    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:21:45.976087    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:21:45.976159    8315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:21:45.976260    8315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:21:45.976297    8315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:21:45.976339    8315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:21:45.976345    8315 kubeadm.go:319] 
	I1120 20:21:45.976416    8315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:21:45.976432    8315 kubeadm.go:319] 
	I1120 20:21:45.976492    8315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:21:45.976498    8315 kubeadm.go:319] 
	I1120 20:21:45.976524    8315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:21:45.976573    8315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:21:45.976612    8315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:21:45.976618    8315 kubeadm.go:319] 
	I1120 20:21:45.976662    8315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:21:45.976669    8315 kubeadm.go:319] 
	I1120 20:21:45.976708    8315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:21:45.976716    8315 kubeadm.go:319] 
	I1120 20:21:45.976761    8315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:21:45.976832    8315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:21:45.976903    8315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:21:45.976909    8315 kubeadm.go:319] 
	I1120 20:21:45.976975    8315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:21:45.977039    8315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:21:45.977046    8315 kubeadm.go:319] 
	I1120 20:21:45.977115    8315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977197    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 \
	I1120 20:21:45.977222    8315 kubeadm.go:319] 	--control-plane 
	I1120 20:21:45.977228    8315 kubeadm.go:319] 
	I1120 20:21:45.977318    8315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:21:45.977332    8315 kubeadm.go:319] 
	I1120 20:21:45.977426    8315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977559    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 
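
The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo), which lets a joining node pin the CA without a pre-shared file. A small Go sketch that recomputes it; the ca.crt path is an assumption based on the certificateDir reported earlier.

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// certificateDir was reported as /var/lib/minikube/certs above;
		// the ca.crt filename is an assumption.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}
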
	I1120 20:21:45.977570    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:45.977577    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:45.978905    8315 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1120 20:21:45.980206    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1120 20:21:45.998278    8315 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
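
The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. For context, a bridge CNI conflist of the general shape minikube deploys looks like the following; the exact field values here are illustrative, not the bytes that were written.

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "forceAddress": false,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
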
	I1120 20:21:46.024557    8315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:21:46.024640    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.024705    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-947553 minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-947553 minikube.k8s.io/primary=true
	I1120 20:21:46.163608    8315 ops.go:34] apiserver oom_adj: -16
	I1120 20:21:46.163786    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.664084    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.164553    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.664473    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.164635    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.664221    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.163942    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.663901    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.164591    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.290234    8315 kubeadm.go:1114] duration metric: took 4.265649758s to wait for elevateKubeSystemPrivileges
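
The run of `kubectl get sa default` calls above, spaced roughly 500ms apart, is a poll loop: minikube waits for the default service account to exist before the minikube-rbac clusterrolebinding created earlier can take effect. A sketch of that cadence, with getDefaultSA as a hypothetical stand-in for the remote kubectl call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var tries int

	// getDefaultSA is a hypothetical stand-in for running
	// `kubectl get sa default` on the node; here it succeeds on the 5th try.
	func getDefaultSA() error {
		tries++
		if tries < 5 {
			return errors.New(`serviceaccount "default" not found`)
		}
		return nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if getDefaultSA() == nil {
				fmt.Printf("default service account ready after %d tries\n", tries)
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("timed out waiting for default service account")
	}
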
	I1120 20:21:50.290282    8315 kubeadm.go:403] duration metric: took 16.650648707s to StartCluster
	I1120 20:21:50.290306    8315 settings.go:142] acquiring lock: {Name:mke92973c8f33ef32fe11f7b266adf74cd3ec47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.290453    8315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:50.290990    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.291268    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:21:50.291283    8315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:50.291344    8315 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 20:21:50.291469    8315 addons.go:70] Setting gcp-auth=true in profile "addons-947553"
	I1120 20:21:50.291484    8315 addons.go:70] Setting ingress=true in profile "addons-947553"
	I1120 20:21:50.291498    8315 mustload.go:66] Loading cluster: addons-947553
	I1120 20:21:50.291500    8315 addons.go:239] Setting addon ingress=true in "addons-947553"
	I1120 20:21:50.291494    8315 addons.go:70] Setting cloud-spanner=true in profile "addons-947553"
	I1120 20:21:50.291519    8315 addons.go:239] Setting addon cloud-spanner=true in "addons-947553"
	I1120 20:21:50.291525    8315 addons.go:70] Setting registry=true in profile "addons-947553"
	I1120 20:21:50.291542    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291555    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291554    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291565    8315 addons.go:239] Setting addon registry=true in "addons-947553"
	I1120 20:21:50.291594    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291595    8315 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.291607    8315 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-947553"
	I1120 20:21:50.291627    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291692    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291474    8315 addons.go:70] Setting yakd=true in profile "addons-947553"
	I1120 20:21:50.292160    8315 addons.go:239] Setting addon yakd=true in "addons-947553"
	I1120 20:21:50.292192    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292250    8315 addons.go:70] Setting inspektor-gadget=true in profile "addons-947553"
	I1120 20:21:50.292272    8315 addons.go:239] Setting addon inspektor-gadget=true in "addons-947553"
	I1120 20:21:50.292297    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292485    8315 addons.go:70] Setting ingress-dns=true in profile "addons-947553"
	I1120 20:21:50.292520    8315 addons.go:239] Setting addon ingress-dns=true in "addons-947553"
	I1120 20:21:50.292545    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292621    8315 addons.go:70] Setting registry-creds=true in profile "addons-947553"
	I1120 20:21:50.292644    8315 addons.go:239] Setting addon registry-creds=true in "addons-947553"
	I1120 20:21:50.292671    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292677    8315 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-947553"
	I1120 20:21:50.292719    8315 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:21:50.292755    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292807    8315 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-947553"
	I1120 20:21:50.292829    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-947553"
	I1120 20:21:50.292880    8315 addons.go:70] Setting metrics-server=true in profile "addons-947553"
	I1120 20:21:50.292897    8315 addons.go:239] Setting addon metrics-server=true in "addons-947553"
	I1120 20:21:50.292922    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293069    8315 out.go:179] * Verifying Kubernetes components...
	I1120 20:21:50.293281    8315 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.293300    8315 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-947553"
	I1120 20:21:50.293321    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293536    8315 addons.go:70] Setting default-storageclass=true in profile "addons-947553"
	I1120 20:21:50.293556    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-947553"
	I1120 20:21:50.293573    8315 addons.go:70] Setting storage-provisioner=true in profile "addons-947553"
	I1120 20:21:50.293591    8315 addons.go:239] Setting addon storage-provisioner=true in "addons-947553"
	I1120 20:21:50.293613    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293979    8315 addons.go:70] Setting volcano=true in profile "addons-947553"
	I1120 20:21:50.294002    8315 addons.go:239] Setting addon volcano=true in "addons-947553"
	I1120 20:21:50.294026    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294103    8315 addons.go:70] Setting volumesnapshots=true in profile "addons-947553"
	I1120 20:21:50.294122    8315 addons.go:239] Setting addon volumesnapshots=true in "addons-947553"
	I1120 20:21:50.294146    8315 host.go:66] Checking if "addons-947553" exists ...
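
The "Setting addon ... / Checking if ... exists" lines above interleave out of order, which suggests one goroutine per addon rather than a sequential loop. A sketch of that fan-out pattern, with enableAddon as a hypothetical stand-in for the per-addon setup:

	package main

	import (
		"fmt"
		"sync"
	)

	// enableAddon is a hypothetical stand-in for minikube's per-addon setup
	// ("Setting addon X=true", then "Checking if ... exists").
	func enableAddon(name string) {
		fmt.Printf("Setting addon %s=true in \"addons-947553\"\n", name)
		fmt.Println("Checking if \"addons-947553\" exists ...")
	}

	func main() {
		addons := []string{"ingress", "registry", "metrics-server", "csi-hostpath-driver"}
		var wg sync.WaitGroup
		for _, a := range addons {
			wg.Add(1)
			// One goroutine per addon is what would produce the interleaved,
			// out-of-order log lines above.
			go func(name string) {
				defer wg.Done()
				enableAddon(name)
			}(a)
		}
		wg.Wait()
	}
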
	I1120 20:21:50.294465    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:50.297973    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.299952    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 20:21:50.299964    8315 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 20:21:50.300060    8315 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 20:21:50.300093    8315 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 20:21:50.299977    8315 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 20:21:50.301985    8315 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-947553"
	I1120 20:21:50.302030    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.302603    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 20:21:50.303185    8315 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 20:21:50.302631    8315 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:50.303261    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	W1120 20:21:50.302916    8315 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 20:21:50.303040    8315 addons.go:239] Setting addon default-storageclass=true in "addons-947553"
	I1120 20:21:50.303355    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.303953    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 20:21:50.303969    8315 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 20:21:50.303973    8315 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 20:21:50.303953    8315 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 20:21:50.304024    8315 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:50.305543    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 20:21:50.304040    8315 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 20:21:50.304099    8315 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 20:21:50.305800    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 20:21:50.304918    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.304913    8315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:21:50.305899    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:50.307319    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 20:21:50.306014    8315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:50.307351    8315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:21:50.307429    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.307470    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 20:21:50.307480    8315 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1120 20:21:50.306784    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:21:50.307511    8315 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:21:50.306817    8315 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 20:21:50.307620    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.306822    8315 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:50.307695    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 20:21:50.307706    8315 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:50.307716    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 20:21:50.306909    8315 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:50.308092    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 20:21:50.308474    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 20:21:50.308512    8315 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 20:21:50.308524    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 20:21:50.308827    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.308882    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309172    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.309208    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309325    8315 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 20:21:50.309319    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.309343    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:50.309353    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:21:50.309929    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.310172    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.311742    8315 out.go:179]   - Using image docker.io/busybox:stable
	I1120 20:21:50.311746    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 20:21:50.311894    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:50.311914    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 20:21:50.313106    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:50.313128    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 20:21:50.314097    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.314587    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 20:21:50.315478    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.315516    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.316257    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.316610    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 20:21:50.317131    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.317791    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318124    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318489    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.318521    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318877    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.319057    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319200    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319245    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 20:21:50.319767    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319780    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319803    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319808    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320039    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320130    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320260    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320721    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.320726    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321176    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321210    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321308    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321337    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321371    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321267    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321416    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321437    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321401    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321692    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321834    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 20:21:50.321903    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321928    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321951    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322097    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322416    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322441    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322690    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322712    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.322755    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323004    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323171    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.323197    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323359    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324196    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.324226    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324375    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.324536    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 20:21:50.325593    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 20:21:50.325607    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 20:21:50.328078    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328534    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.328557    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328735    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	W1120 20:21:50.476524    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.476558    8315 retry.go:31] will retry after 236.913044ms: ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513415    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513438    8315 retry.go:31] will retry after 367.013463ms: ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513646    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513672    8315 retry.go:31] will retry after 332.960576ms: ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
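
The sshutil/retry lines above show the recovery path for the "connection reset by peer" handshake failures: each dial is retried after a short randomized delay. A sketch of that jittered retry loop, with dialSSH as a hypothetical stand-in for the SSH handshake:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var attempts int

	// dialSSH is a hypothetical stand-in for the handshake that failed above;
	// here it fails twice before succeeding.
	func dialSSH() error {
		attempts++
		if attempts < 3 {
			return errors.New("ssh: handshake failed: connection reset by peer")
		}
		return nil
	}

	func main() {
		for i := 0; i < 5; i++ {
			if dialSSH() == nil {
				fmt.Println("connected after", attempts, "attempts")
				return
			}
			// Randomized delay mirrors the "will retry after 236.913044ms"
			// style of backoffs in the log.
			d := time.Duration(100+rand.Intn(300)) * time.Millisecond
			fmt.Printf("dial failure (will retry after %v)\n", d)
			time.Sleep(d)
		}
	}
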
	I1120 20:21:50.932554    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:50.932720    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:21:51.133049    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:51.144339    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:51.194458    8315 node_ready.go:35] waiting up to 6m0s for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206010    8315 node_ready.go:49] node "addons-947553" is "Ready"
	I1120 20:21:51.206043    8315 node_ready.go:38] duration metric: took 11.547378ms for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206057    8315 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:21:51.206112    8315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:21:51.317342    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:51.364561    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:51.396520    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 20:21:51.396550    8315 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 20:21:51.401286    8315 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 20:21:51.401312    8315 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 20:21:51.407832    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:51.408939    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:51.438765    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:51.452371    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:51.487541    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:21:51.487567    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 20:21:51.667634    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:51.705278    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 20:21:51.705307    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 20:21:52.073299    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 20:21:52.073332    8315 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 20:21:52.156840    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 20:21:52.156890    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 20:21:52.182216    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 20:21:52.182260    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 20:21:52.289345    8315 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.289373    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 20:21:52.358156    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:21:52.358186    8315 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:21:52.524224    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:52.790466    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 20:21:52.790495    8315 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 20:21:52.867899    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 20:21:52.867926    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 20:21:52.911549    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.970452    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 20:21:52.970488    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 20:21:53.004660    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.004687    8315 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:21:53.165475    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.165505    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 20:21:53.292981    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 20:21:53.293014    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 20:21:53.388236    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 20:21:53.388266    8315 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 20:21:53.476188    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.678912    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.790164    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 20:21:53.790192    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 20:21:53.898000    8315 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:53.898021    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 20:21:54.089534    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 20:21:54.089570    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 20:21:54.326111    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:54.418621    8315 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.485861131s)
	I1120 20:21:54.418657    8315 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
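
The sed pipeline that just completed edits CoreDNS's Corefile in place: it inserts a hosts block mapping host.minikube.internal to the host IP (192.168.39.1) ahead of the forward directive, and a log directive ahead of errors. Reconstructed from those sed expressions, the resulting Corefile fragment looks roughly like this; directives other than log and the hosts block follow the stock kubeadm Corefile and are shown only for context:

	.:53 {
	    log
	    errors
	    health { lameduck 5s }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    cache 30
	    loop
	    reload
	    loadbalance
	}
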
	I1120 20:21:54.662053    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 20:21:54.662081    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 20:21:54.924608    8315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-947553" context rescaled to 1 replicas
	I1120 20:21:55.256603    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 20:21:55.256640    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 20:21:55.513213    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.380124251s)
	I1120 20:21:55.513226    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.368859446s)
	I1120 20:21:55.513320    8315 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.307185785s)
	I1120 20:21:55.513363    8315 api_server.go:72] duration metric: took 5.222046626s to wait for apiserver process to appear ...
	I1120 20:21:55.513378    8315 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:21:55.513400    8315 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I1120 20:21:55.523525    8315 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I1120 20:21:55.528356    8315 api_server.go:141] control plane version: v1.34.1
	I1120 20:21:55.528379    8315 api_server.go:131] duration metric: took 14.994765ms to wait for apiserver health ...
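
The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 "ok". A minimal sketch; InsecureSkipVerify is a shortcut for the sketch only, where the real client would trust the cluster CA instead.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Sketch only: skip certificate verification instead of loading the
		// cluster CA, which is what a real client should do.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.80:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
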
	I1120 20:21:55.528386    8315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:21:55.548383    8315 system_pods.go:59] 10 kube-system pods found
	I1120 20:21:55.548433    8315 system_pods.go:61] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.548445    8315 system_pods.go:61] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548456    8315 system_pods.go:61] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548466    8315 system_pods.go:61] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.548475    8315 system_pods.go:61] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.548481    8315 system_pods.go:61] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.548491    8315 system_pods.go:61] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.548496    8315 system_pods.go:61] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.548506    8315 system_pods.go:61] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.548517    8315 system_pods.go:61] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.548528    8315 system_pods.go:74] duration metric: took 20.135717ms to wait for pod list to return data ...
	I1120 20:21:55.548544    8315 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:21:55.562077    8315 default_sa.go:45] found service account: "default"
	I1120 20:21:55.562106    8315 default_sa.go:55] duration metric: took 13.552829ms for default service account to be created ...
	I1120 20:21:55.562116    8315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:21:55.573516    8315 system_pods.go:86] 10 kube-system pods found
	I1120 20:21:55.573548    8315 system_pods.go:89] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.573556    8315 system_pods.go:89] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573563    8315 system_pods.go:89] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573568    8315 system_pods.go:89] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.573572    8315 system_pods.go:89] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.573584    8315 system_pods.go:89] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.573588    8315 system_pods.go:89] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.573591    8315 system_pods.go:89] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.573595    8315 system_pods.go:89] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.573610    8315 system_pods.go:89] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.573619    8315 system_pods.go:126] duration metric: took 11.497162ms to wait for k8s-apps to be running ...
	I1120 20:21:55.573629    8315 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:21:55.573680    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:21:55.821435    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 20:21:55.821456    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 20:21:56.372153    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 20:21:56.372176    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 20:21:57.167628    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.167657    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 20:21:57.654485    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.724650    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 20:21:57.727763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728228    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:57.728257    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728455    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:57.738040    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.420656069s)
	I1120 20:21:57.738102    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.373508925s)
	I1120 20:21:58.308598    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 20:21:58.564754    8315 addons.go:239] Setting addon gcp-auth=true in "addons-947553"
	I1120 20:21:58.564806    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:58.566499    8315 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 20:21:58.568681    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569089    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:58.569115    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569249    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:58.833314    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.424339116s)
	I1120 20:21:58.833336    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.425455784s)
	I1120 20:21:58.833402    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.394606542s)
	I1120 20:22:00.317183    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.864775691s)
	I1120 20:22:00.317236    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.649563834s)
	I1120 20:22:00.317246    8315 addons.go:480] Verifying addon ingress=true in "addons-947553"
	I1120 20:22:00.317313    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.793066584s)
	I1120 20:22:00.317374    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.405778801s)
	I1120 20:22:00.317401    8315 addons.go:480] Verifying addon registry=true in "addons-947553"
	I1120 20:22:00.317473    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.841250467s)
	I1120 20:22:00.317500    8315 addons.go:480] Verifying addon metrics-server=true in "addons-947553"
	I1120 20:22:00.317549    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.638598976s)
	I1120 20:22:00.318753    8315 out.go:179] * Verifying ingress addon...
	I1120 20:22:00.319477    8315 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-947553 service yakd-dashboard -n yakd-dashboard
	
	I1120 20:22:00.319499    8315 out.go:179] * Verifying registry addon...
	I1120 20:22:00.321062    8315 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 20:22:00.321882    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 20:22:00.330255    8315 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:22:00.330274    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:00.330580    8315 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:22:00.330602    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
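The kapi.go:96 lines that follow repeat one check: list the pods matching a label selector and report whether they have left Pending. A hedged client-go sketch of that check, assuming a reachable kubeconfig; allRunning is a hypothetical helper name, while the List call and PodRunning phase are the real client-go API.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allRunning lists pods in ns matching selector and reports whether every
// one is in phase Running (the log's "current state: Pending" otherwise).
func allRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := allRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry")
	fmt.Println(ok, err)
}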
	I1120 20:22:00.843037    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:00.862027    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.136755    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.810594192s)
	I1120 20:22:01.136799    8315 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.563097568s)
	W1120 20:22:01.136810    8315 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 20:22:01.136824    8315 system_svc.go:56] duration metric: took 5.563190734s WaitForService to wait for kubelet
	I1120 20:22:01.136838    8315 retry.go:31] will retry after 297.745206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
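The failure above is a CRD ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, before the apiserver has established the new kind, hence "no matches for kind VolumeSnapshotClass" and the hint "ensure CRDs are installed first". minikube handles it by retrying the apply (with --force, a few lines below). An alternative, shown here as a hedged sketch assuming the apiextensions clientset, is to wait for the CRD's Established condition before creating instances; waitEstablished is an illustrative name.

package main

import (
	"context"
	"fmt"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// waitEstablished polls a CRD until its Established condition is True,
// after which objects of that kind can be created without the mapping error.
func waitEstablished(cs *apiextclient.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextensionsv1.Established && c.Status == apiextensionsv1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("CRD %s not established after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := apiextclient.NewForConfigOrDie(cfg)
	fmt.Println(waitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io", time.Minute))
}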
	I1120 20:22:01.136835    8315 kubeadm.go:587] duration metric: took 10.845518493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:22:01.136866    8315 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:22:01.169336    8315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1120 20:22:01.169377    8315 node_conditions.go:123] node cpu capacity is 2
	I1120 20:22:01.169391    8315 node_conditions.go:105] duration metric: took 32.519256ms to run NodePressure ...
	I1120 20:22:01.169403    8315 start.go:242] waiting for startup goroutines ...
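The node_conditions.go lines above read each node's reported capacity ("node storage ephemeral capacity is 17734596Ki", "node cpu capacity is 2"). A minimal client-go sketch of reading those same fields, under the same kubeconfig assumption as earlier; only the printing is ours, the Capacity fields are the real corev1 API.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity, e.g. cpu: "2",
		// ephemeral-storage: "17734596Ki" as reported in the log above.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s, ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}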
	I1120 20:22:01.357701    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:01.358795    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.434928    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:22:01.868679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.868782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.346294    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.352833    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.862753    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.890512    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.996195    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.34165692s)
	I1120 20:22:02.996225    8315 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.429699726s)
	I1120 20:22:02.996254    8315 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:22:02.997930    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:22:02.997950    8315 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 20:22:02.999363    8315 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 20:22:02.999980    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 20:22:03.000816    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 20:22:03.000833    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 20:22:03.047631    8315 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:22:03.047661    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.095774    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 20:22:03.095800    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 20:22:03.172675    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.172696    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 20:22:03.258447    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.328725    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.328999    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:03.506980    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.835051    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.838342    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.009598    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.059484    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.624514335s)
	I1120 20:22:04.342509    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.346146    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:04.552392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.655990    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.397510493s)
	I1120 20:22:04.657251    8315 addons.go:480] Verifying addon gcp-auth=true in "addons-947553"
	I1120 20:22:04.658765    8315 out.go:179] * Verifying gcp-auth addon...
	I1120 20:22:04.660962    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 20:22:04.689345    8315 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 20:22:04.689379    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:04.830184    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.831805    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.008119    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.171353    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:05.336728    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:05.336869    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.517754    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.671439    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:05.828977    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:05.832656    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.008324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:06.167007    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:06.324520    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:06.327339    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.505702    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:06.665077    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:06.831323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.832004    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.005311    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:07.170575    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:07.326420    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.330401    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:07.504324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:07.665313    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:07.827482    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.830140    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.005717    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:08.168657    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:08.325483    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.326808    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:08.508047    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:08.664546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:08.828313    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.829419    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:09.004761    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:09.165417    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:09.325923    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:09.327133    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:09.503806    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:09.665158    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:09.827304    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:09.828458    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:10.005165    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:10.164419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:10.328020    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:10.328899    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:10.503540    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:10.665211    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:10.827565    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:10.828293    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.007088    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:11.172637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:11.329792    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.330515    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:11.506127    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:11.666152    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:11.832352    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.832833    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.009397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:12.164503    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:12.324601    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:12.330001    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.557333    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:12.690799    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:12.826246    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.827168    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:13.004570    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:13.166124    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:13.330939    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:13.334724    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:13.505747    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:13.664947    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:13.826640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:13.827501    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.005488    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:14.172285    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:14.325676    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.327874    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:14.505478    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:14.665377    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:14.828164    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.828324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:15.004108    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:15.165356    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:15.332218    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:15.345244    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:15.505401    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:15.665824    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:15.827117    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:15.827311    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.006364    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:16.177517    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:16.340592    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.341189    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:16.504797    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:16.664830    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:16.830245    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.830443    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.005532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:17.167264    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:17.330014    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.331394    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:17.559675    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:17.678477    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:17.826495    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.832794    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.005502    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:18.166351    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:18.327573    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.327734    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:18.503894    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:18.666269    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:18.830279    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.832316    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:19.005728    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:19.166452    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:19.327371    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:19.329317    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:19.506362    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:19.670606    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:19.831060    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:19.832764    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:20.004618    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:20.166635    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:20.327601    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:20.327638    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:20.504392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:20.665742    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:20.827471    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:20.829616    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:21.004605    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:21.169921    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:21.333272    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:21.336011    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:21.504542    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:21.665682    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:21.825419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:21.828055    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.004227    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:22.164229    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:22.326927    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.332370    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:22.505033    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:22.666978    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:22.834204    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.836963    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:23.003930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:23.168623    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:23.430297    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:23.433691    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:23.508735    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:23.667674    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:23.836886    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:23.837245    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:24.005900    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:24.169110    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:24.326634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:24.327904    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:24.673297    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:24.673506    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:24.830570    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:24.831631    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:25.009064    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:25.164922    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:25.325762    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:25.327935    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:25.504532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:25.667618    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:25.827414    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:25.828623    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:26.005073    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:26.167711    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:26.326679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:26.327247    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:26.505503    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:26.665655    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:26.825436    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:26.828500    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:27.005840    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:27.167830    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:27.328527    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:27.328746    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:27.506666    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:27.666716    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:27.832531    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:27.833632    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:28.006766    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:28.165323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:28.327708    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:28.328341    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:28.506036    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:28.666241    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:28.944433    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:28.944810    8315 kapi.go:107] duration metric: took 28.622926025s to wait for kubernetes.io/minikube-addons=registry ...
	I1120 20:22:29.006863    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:29.167687    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:29.328145    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:29.504218    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:29.664460    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:29.827372    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:30.004445    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:30.164822    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:30.324811    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:30.504410    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:30.665044    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:30.825337    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:31.004318    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:31.164385    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:31.325406    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:31.505029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:31.665134    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:31.825650    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:32.004127    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:32.166139    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:32.324701    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:32.504614    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:32.664944    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:32.825143    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:33.004577    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:33.165685    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:33.325974    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:33.704460    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:33.708873    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:33.825075    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:34.004596    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:34.165867    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:34.325611    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:34.504800    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:34.665454    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:34.825871    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:35.004177    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:35.164697    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:35.326110    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:35.503481    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:35.664737    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:35.826308    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:36.004218    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:36.165000    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:36.324326    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:36.503689    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:36.666782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:36.825294    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:37.005202    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:37.164053    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:37.325572    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:37.505330    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:37.664284    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:37.825262    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:38.004289    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:38.164481    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:38.326051    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:38.503226    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:38.664232    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:38.824502    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:39.004487    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:39.164963    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:39.325878    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:39.505209    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:39.664636    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:39.825100    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:40.003777    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:40.165642    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:40.325683    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:40.504393    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:40.664821    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:40.824897    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:41.004355    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:41.164546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:41.326024    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:41.504280    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:41.664217    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:41.825780    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:42.005113    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:42.164701    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:42.325297    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:42.504448    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:42.665577    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:42.824743    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:43.004833    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:43.165891    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:43.326070    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:43.503696    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:43.664800    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:43.826756    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:44.005306    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:44.164704    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:44.325455    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:44.505302    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:44.664815    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:44.824692    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:45.003742    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:45.164950    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:45.325614    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:45.504532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:45.664827    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:45.826405    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:46.003951    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:46.165370    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:46.325730    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:46.505387    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:46.664689    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:46.825033    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:47.004484    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:47.165449    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:47.325798    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:47.504952    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:47.665632    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:47.825364    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:48.003790    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:48.165543    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:48.324818    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:48.504519    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:48.664630    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:48.825474    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:49.003721    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:49.164517    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:49.326505    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:49.504416    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:49.664711    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:49.825942    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:50.004200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:50.164578    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:50.325328    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:50.503484    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:50.665421    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:50.825294    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:51.004287    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:51.164268    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:51.325315    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:51.504380    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:51.665173    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:51.825228    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:52.004294    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:52.165271    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:52.325922    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:52.504540    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:52.664739    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:52.825458    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:53.003930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:53.165838    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:53.325362    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:53.503610    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:53.664870    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:53.827535    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:54.004328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:54.164077    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:54.324281    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:54.504388    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:54.665303    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:54.825120    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:55.004586    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:55.164561    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:55.325150    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:55.504219    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:55.664405    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:55.826068    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:56.004103    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:56.164821    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:56.325311    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:56.504506    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:56.664957    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:56.825313    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:57.004010    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:57.164442    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:57.325029    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:57.504374    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:57.664757    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:57.825231    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:58.005792    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:58.165223    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:58.325160    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:58.504029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:58.663903    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:58.825149    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:59.005092    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:59.164148    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:59.324606    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:59.506476    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:59.664372    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:59.825198    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:00.005082    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:00.164250    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:00.326383    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:00.503808    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:00.665909    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:00.825874    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:01.004396    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:01.164829    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:01.326451    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:01.504153    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:01.664393    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:01.825331    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:02.004168    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:02.165403    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:02.325338    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:02.504355    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:02.664961    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:02.826305    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:03.003577    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:03.165374    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:03.325222    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:03.503643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:03.665037    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:03.824710    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:04.004671    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:04.166844    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:04.325995    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:04.503907    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:04.665203    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:04.825349    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:05.003990    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:05.163740    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:05.325833    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:05.504450    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:05.665053    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:05.824804    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:06.005371    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:06.164513    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:06.324904    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:06.504771    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:06.665389    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:06.825137    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:07.003665    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:07.165006    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:07.325121    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:07.504075    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:07.665109    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:07.824752    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:08.005627    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:08.165094    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:08.325074    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:08.504510    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:08.665363    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:08.825519    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:09.004201    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:09.165446    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:09.328697    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:09.504259    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:09.664453    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:09.825519    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:10.005404    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:10.164687    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:10.325987    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:10.504122    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:10.664875    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:10.826159    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:11.003419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:11.164744    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:11.325475    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:11.504220    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:11.664757    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:11.825474    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:12.004170    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:12.164955    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:12.325525    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:12.503631    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:12.665991    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:12.825430    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:13.003813    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:13.165098    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:13.325081    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:13.505315    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:13.665028    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:13.824542    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:14.005048    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:14.164487    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:14.325020    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:14.505722    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:14.665177    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:14.824929    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:15.004788    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:15.165203    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:15.324423    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:15.504085    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:15.664347    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:15.825592    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:16.007081    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:16.164221    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:16.325158    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:16.504429    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:16.664640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:16.825185    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:17.004104    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:17.165054    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:17.325282    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:17.503452    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:17.665265    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:17.824735    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:18.004695    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:18.164715    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:18.325314    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:18.503892    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:18.666272    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:18.824679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:19.004416    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:19.164791    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:19.326105    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:19.504065    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:19.664586    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:19.825391    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:20.004785    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:20.164970    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:20.325404    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:20.503939    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:20.665093    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:20.824880    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:21.004871    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:21.165473    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:21.325426    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:21.505660    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:21.664949    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:21.825911    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:22.006475    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:22.164603    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:22.325683    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:22.504419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:22.664842    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:22.825338    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:23.003647    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:23.165240    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:23.326436    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:23.506070    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:23.664446    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:23.824867    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:24.005086    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:24.163951    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:24.325452    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:24.504677    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:24.665161    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:24.826375    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:25.004842    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:25.164847    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:25.325155    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:25.504019    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:25.665239    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:25.824773    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:26.005740    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:26.165126    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:26.324566    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:26.504021    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:26.665217    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:26.825011    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:27.003550    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:27.165239    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:27.325538    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:27.503904    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:27.664722    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:27.825083    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:28.004187    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:28.166259    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:28.324888    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:28.504236    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:28.664582    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:28.825165    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:29.003447    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:29.164432    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:29.325158    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:29.504121    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:29.664009    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:29.825082    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:30.004052    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:30.165479    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:30.328054    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:30.504976    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:30.667464    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:30.824784    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:31.004256    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:31.166254    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:31.329074    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:31.504429    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:31.668785    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:31.834378    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:32.012921    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:32.182382    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:32.328273    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:32.512432    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:32.668839    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:32.828146    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:33.010373    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:33.171918    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:33.327438    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:33.508687    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:33.668358    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:33.825953    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:34.005514    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:34.169126    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:34.328834    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:34.508779    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:34.665012    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:34.828137    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:35.004394    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:35.166928    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:35.325934    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:35.505139    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:35.664302    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:35.826453    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:36.009232    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:36.164433    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:36.326221    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:36.503774    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:36.668019    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:36.828315    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:37.003923    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:37.171231    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:37.329115    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:37.504101    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:37.665063    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:37.827549    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:38.008085    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:38.165142    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:38.325522    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:38.504378    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:38.664419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:38.826131    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:39.003818    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:39.169232    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:39.324564    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:39.504485    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:39.668374    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:39.828255    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:40.006466    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:40.166014    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:40.327358    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:40.510974    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:40.670391    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:40.826816    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:41.005686    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:41.164891    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:41.328274    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:41.503673    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:41.665805    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:41.825384    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:42.007673    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:42.164828    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:42.329991    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:42.507109    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:42.666970    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:42.827404    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:43.006050    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:43.165530    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:43.336903    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:43.508108    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:43.665050    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:43.828179    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:44.004826    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:44.168465    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:44.327802    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:44.588926    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:44.686035    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:44.836096    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:45.013912    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:45.170060    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:45.330109    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:45.506461    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:45.666266    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:45.833355    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:46.012759    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:46.165788    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:46.331536    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:46.544743    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:46.668681    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:46.826281    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:47.004579    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:47.164501    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:47.325301    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:47.510314    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:47.664541    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:47.825733    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:48.005390    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:48.164631    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:48.325040    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:48.503952    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:48.666328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:48.824449    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:49.004387    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:49.165135    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:49.324520    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:49.504929    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:49.665257    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:49.825179    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:50.004248    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:50.164504    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:50.326488    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:50.504139    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:50.665131    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:50.825464    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:51.004233    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:51.165223    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:51.324723    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:51.505340    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:51.665910    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:51.824647    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:52.004550    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:52.165026    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:52.324772    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:52.504303    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:52.667291    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:52.825223    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:53.004148    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:53.164388    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:53.325070    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:53.503625    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:53.665901    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:53.826412    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:54.003441    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:54.164614    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:54.325319    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:54.505054    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:54.665324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:54.825610    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:55.004621    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:55.165405    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:55.326233    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:55.503470    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:55.665016    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:55.825575    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:56.004511    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:56.165472    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:56.325694    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:56.504017    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:56.663700    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:56.825810    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:57.004323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:57.165204    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:57.324888    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:57.504535    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:57.664639    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:57.825026    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:58.003739    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:58.165764    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:58.325045    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:58.503360    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:58.664840    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:58.826605    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:59.003999    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:59.165275    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:59.325421    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:59.504637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:59.665014    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:59.824766    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:00.005128    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:00.164263    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:00.325333    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:00.504062    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:00.664931    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:00.826290    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:01.004640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:01.164832    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:01.325901    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:01.505129    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:01.664227    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:01.824719    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:02.004950    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:02.165053    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:02.325360    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:02.505959    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:02.664868    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:02.826277    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:03.004096    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:03.164445    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:03.324757    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:03.505252    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:03.665119    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:03.824454    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:04.004909    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:04.165591    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:04.325118    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:04.507564    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:04.664700    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:04.826799    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:05.005349    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:05.165155    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:05.324582    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:05.504443    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:05.665778    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:05.825741    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:06.004414    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:06.164474    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:06.326066    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:06.503776    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:06.664979    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:06.826056    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:07.003318    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:07.164124    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:07.324310    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:07.503413    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:07.664606    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:07.824831    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:08.004542    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:08.165571    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:08.325290    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:08.503944    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:08.666366    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:08.825256    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:09.003826    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:09.165200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:09.324763    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:09.505835    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:09.665113    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:09.824632    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:10.004172    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:10.164462    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:10.324992    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:10.503686    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:10.664930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:10.825754    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:11.004000    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:11.163782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:11.325549    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:11.504780    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:11.665314    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:11.825684    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:12.004180    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:12.164082    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:12.324141    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:12.504612    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:12.664748    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:12.825910    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:13.004630    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:13.165026    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:13.325684    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:13.504463    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:13.664189    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:13.824224    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:14.004212    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:14.165015    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:14.324331    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:14.507504    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:14.664678    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:14.826028    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:15.004824    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:15.165312    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:15.325310    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:15.503525    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:15.664637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:15.825538    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:16.005397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:16.165397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:16.324350    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:16.504613    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:16.665640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:16.825950    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:17.004189    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:17.167663    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:17.326720    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:17.508041    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:17.665546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:17.828365    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:18.004058    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:18.165184    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:18.325634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:18.504817    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:18.668489    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:18.828972    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:19.005704    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:19.167268    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:19.334698    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:19.507751    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:19.667328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:19.831249    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:20.005669    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:20.167145    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:20.328610    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:20.504643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:20.666213    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:20.830891    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:21.006991    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:21.167023    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:21.326125    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:21.512788    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:21.665384    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:21.829776    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:22.003972    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:22.170397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:22.324898    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:22.505825    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:22.665603    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:22.827634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:23.007579    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:23.168453    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:23.327180    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:23.503837    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:23.665184    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:23.824592    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:24.005482    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:24.164766    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:24.330141    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:24.504539    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:24.667427    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:24.835328    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:25.139729    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:25.240898    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:25.326048    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:25.505595    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:25.670610    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:25.827986    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:26.007659    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:26.164981    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:26.331893    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:26.505078    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:26.665057    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:26.824303    8315 kapi.go:107] duration metric: took 2m26.503242857s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 20:24:27.004029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.164962    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:27.504834    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.668267    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.007248    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.166983    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.507055    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.666163    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.005997    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.328979    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.505976    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.669956    8315 kapi.go:107] duration metric: took 2m25.008991629s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 20:24:29.672108    8315 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-947553 cluster.
	I1120 20:24:29.673437    8315 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 20:24:29.674752    8315 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
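[Editor's note] The three messages above describe the gcp-auth addon's opt-out mechanism: pods carrying a label with the `gcp-auth-skip-secret` key are skipped by the credential-mounting webhook (the ingress-nginx controller sandbox later in this log carries exactly that label, with value `true`). A minimal sketch, assuming a hypothetical pod name and image, of declaring such a pod with the Kubernetes Go client types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: the gcp-auth-skip-secret label tells minikube's
	// gcp-auth webhook not to mount the GCP credentials into this pod.
	// The message above only calls out the key; the sandboxes in this log
	// use "true" as the value.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-creds-pod", // hypothetical name
			Namespace: "default",
			Labels: map[string]string{
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/k8s-minikube/busybox"},
			},
		},
	}
	fmt.Printf("labels: %v\n", pod.Labels)
}
```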
	I1120 20:24:30.011875    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:30.506718    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.005946    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.508062    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.004768    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.513385    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.006643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.504200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:34.004984    8315 kapi.go:107] duration metric: took 2m31.004999967s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
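[Editor's note] The kapi.go:96/kapi.go:107 pairs above trace a simple poll loop: each addon's label selector is re-listed roughly every 500ms (per the timestamps) until its pods leave Pending, at which point a duration metric is emitted. A minimal sketch of that pattern with client-go; `waitForPods` and its log format are hypothetical stand-ins, not minikube's actual kapi.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitForPods polls the pods matching selector until they are all Running,
// logging the current state on each pass and a duration metric on success.
func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					break
				}
			}
			if allRunning {
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods matching %q", selector)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the timestamps above
	}
}

func main() {
	// Exercise the helper against a fake clientset (no real cluster needed);
	// with no pods present it simply times out after one second.
	c := fake.NewSimpleClientset()
	_ = waitForPods(context.Background(), c, "kube-system",
		"kubernetes.io/minikube-addons=gcp-auth", time.Second)
}
```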
	I1120 20:24:34.006745    8315 out.go:179] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1120 20:24:34.007905    8315 addons.go:515] duration metric: took 2m43.716565511s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1120 20:24:34.007942    8315 start.go:247] waiting for cluster config update ...
	I1120 20:24:34.007968    8315 start.go:256] writing updated cluster config ...
	I1120 20:24:34.008267    8315 ssh_runner.go:195] Run: rm -f paused
	I1120 20:24:34.016789    8315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:24:34.020696    8315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.026522    8315 pod_ready.go:94] pod "coredns-66bc5c9577-tpfkd" is "Ready"
	I1120 20:24:34.026545    8315 pod_ready.go:86] duration metric: took 5.821939ms for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.029616    8315 pod_ready.go:83] waiting for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.035420    8315 pod_ready.go:94] pod "etcd-addons-947553" is "Ready"
	I1120 20:24:34.035447    8315 pod_ready.go:86] duration metric: took 5.807107ms for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.038012    8315 pod_ready.go:83] waiting for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.042359    8315 pod_ready.go:94] pod "kube-apiserver-addons-947553" is "Ready"
	I1120 20:24:34.042389    8315 pod_ready.go:86] duration metric: took 4.353428ms for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.045156    8315 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.421067    8315 pod_ready.go:94] pod "kube-controller-manager-addons-947553" is "Ready"
	I1120 20:24:34.421095    8315 pod_ready.go:86] duration metric: took 375.9154ms for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.622667    8315 pod_ready.go:83] waiting for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.021658    8315 pod_ready.go:94] pod "kube-proxy-92nmr" is "Ready"
	I1120 20:24:35.021685    8315 pod_ready.go:86] duration metric: took 398.990446ms for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.222270    8315 pod_ready.go:83] waiting for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621176    8315 pod_ready.go:94] pod "kube-scheduler-addons-947553" is "Ready"
	I1120 20:24:35.621208    8315 pod_ready.go:86] duration metric: took 398.900241ms for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621225    8315 pod_ready.go:40] duration metric: took 1.604402122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:24:35.668514    8315 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:24:35.670410    8315 out.go:179] * Done! kubectl is now configured to use "addons-947553" cluster and "default" namespace by default
	
	
	==> CRI-O <==
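[Editor's note] The entries below are CRI-O's own debug log: each Request/Response pair is a CRI gRPC call (Version, ImageFsInfo, ListContainers, ListPodSandbox) arriving over the runtime socket, presumably from the kubelet and from the test harness's log collection. A minimal sketch, assuming CRI-O's default socket path, of issuing the same Version RPC with the published cri-api client:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O is listening on its default unix socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Same RPC the first debug entry records: /runtime.v1.RuntimeService/Version.
	resp, err := runtimev1.NewRuntimeServiceClient(conn).Version(ctx, &runtimev1.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```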
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.299707308Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad1cd90f-21d8-40d7-97d5-3c58aad7a468 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.301643770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03a3cd5f-38a7-47b2-8780-aaee7b8dfc9f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.302814060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670849302786834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03a3cd5f-38a7-47b2-8780-aaee7b8dfc9f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.303795889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0598d28e-81e0-4902-ba3e-4a7d74771989 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.304133393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0598d28e-81e0-4902-ba3e-4a7d74771989 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.304894906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c9950d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4
cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be
470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf
2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b19
0596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8
443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c
65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0598d28e-81e0-4902-ba3e-4a7d74771989 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.335183811Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4121ef3f-5ba6-41f1-a6ea-26f196c7ff3d name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.336424873Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f7f2d118e6523352bd7c9b61e15ab7337bd3d7de701870c2971e468e62ffd547,Metadata:&PodSandboxMetadata{Name:nginx,Uid:261f896c-810b-4000-a18d-13ad1a4b0967,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670368473137047,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 261f896c-810b-4000-a18d-13ad1a4b0967,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:26:08.137374063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73140ecc976f2b92d584656177ff4f5cfceaf1052738a521b5da703f34d4297a,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:3fabe4f4-d0a9-40fe-a635-e27af546a8ce,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670329608089551,Labels:map[string]string{app: task-pv
-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3fabe4f4-d0a9-40fe-a635-e27af546a8ce,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:25:29.282372021Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&PodSandboxMetadata{Name:busybox,Uid:709b0bdb-dd50-4d23-b6f1-1f659e2347cf,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670276578436563,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:24:36.259942574Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&PodSandboxMetadata{
Name:ingress-nginx-controller-6c8bf45fb-6hpj8,Uid:b8dafe03-8e55-485a-ace3-f516c9950d0d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670256340718077,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c9950d0d,pod-template-hash: 6c8bf45fb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:22:00.096136750Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ad582478-b86f-4230-9f35-836dfdfac5de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670119700759150,Labels:map[string]string{addonmanager.kubernet
es.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-20T20:
21:57.672822822Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:3988d2f6-2df1-49e8-8aa5-cf6529799ce0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670118391713722,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\"
,\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-11-20T20:21:57.371251279Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-sl95v,Uid:bfbe4372-28d1-4dc0-ace1-e7096a3042ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670114407201946,Labels:map[string]strin
g{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:21:54.076006076Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-tpfkd,Uid:0665c9f9-0189-46cb-bc59-193f9f333001,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670111246126866,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/co
nfig.seen: 2025-11-20T20:21:50.877372262Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&PodSandboxMetadata{Name:kube-proxy-92nmr,Uid:7ff384ea-1b7c-49c7-941c-86933f1f9b0a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670110891857984,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:21:50.545483599Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-947553,Uid:c1563a6fc8f372e84c559079393d0798,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1
763670099665629300,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.80:8443,kubernetes.io/config.hash: c1563a6fc8f372e84c559079393d0798,kubernetes.io/config.seen: 2025-11-20T20:21:38.837174985Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-947553,Uid:eaf9c8220305171251451e6ff3491ef0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670099661848044,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: eaf9c8220305171251451e6ff3491ef0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eaf9c8220305171251451e6ff3491ef0,kubernetes.io/config.seen: 2025-11-20T20:21:38.837177605Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-947553,Uid:2c3e11beb64217e1f3209d29f540719d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670099651428754,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2c3e11beb64217e1f3209d29f540719d,kubernetes.io/config.seen: 2025-11-20T20:21:38.837176628Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{I
d:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&PodSandboxMetadata{Name:etcd-addons-947553,Uid:100ae3428c2e35d8e1cf2deaa80d6526,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763670099638290189,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.80:2379,kubernetes.io/config.hash: 100ae3428c2e35d8e1cf2deaa80d6526,kubernetes.io/config.seen: 2025-11-20T20:21:38.837149944Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4121ef3f-5ba6-41f1-a6ea-26f196c7ff3d name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.338055055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa752c51-269b-4b18-891d-fef3ea072621 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.338817503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa752c51-269b-4b18-891d-fef3ea072621 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.339714261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c9950d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514e
efc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2fef
f972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":
2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa752c51-269b-4b18-891d-fef3ea072621 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.342330799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c065e22a-1e2e-47d9-b836-7c987fb0ca41 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.342387914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c065e22a-1e2e-47d9-b836-7c987fb0ca41 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.344733116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d4596bc-1355-4970-96d8-18a212019488 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.345914460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670849345889653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d4596bc-1355-4970-96d8-18a212019488 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.351079081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4a86641-02a3-48ad-9756-e49c3321cd00 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.351148350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4a86641-02a3-48ad-9756-e49c3321cd00 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.352014826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c9950d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4
cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be
470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf
2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b19
0596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8
443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c
65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4a86641-02a3-48ad-9756-e49c3321cd00 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.388984658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e193abfa-4365-45ff-a559-8f7330f033bb name=/runtime.v1.RuntimeService/Version
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.389070227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e193abfa-4365-45ff-a559-8f7330f033bb name=/runtime.v1.RuntimeService/Version
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.390334854Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7b4e8d3-cad3-4233-8d3b-d3381be8e04a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.391471940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670849391443366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7b4e8d3-cad3-4233-8d3b-d3381be8e04a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.392845455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa1832a6-2cf2-41f6-a1c4-ba0e493a2c0c name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.392988026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa1832a6-2cf2-41f6-a1c4-ba0e493a2c0c name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:34:09 addons-947553 crio[815]: time="2025-11-20 20:34:09.393328322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c9950d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4
cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be
470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf
2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b19
0596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8
443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c
65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa1832a6-2cf2-41f6-a1c4-ba0e493a2c0c name=/runtime.v1.RuntimeService/ListContainers
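
Editor's note: the debug entries above show the pattern behind this whole excerpt. minikube's log collector polls CRI-O over its unix-socket gRPC endpoint, issuing Version, ImageFsInfo, ListPodSandbox, and ListContainers requests; the responses differ only in whether the ContainerFilter carries a CONTAINER_RUNNING state (in which case the exited ingress-nginx admission create/patch containers are omitted) or no state at all. As a minimal illustrative sketch only, not part of this test run, the same ListContainers query could be issued from Go with the k8s.io/cri-api bindings; the socket path below is an assumption matching a typical CRI-O host:

// listcontainers.go: hedged sketch of a CRI ListContainers call against CRI-O.
// Assumptions: k8s.io/cri-api and google.golang.org/grpc are available, and
// CRI-O listens on /var/run/crio/crio.sock (adjust for your host).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI endpoints are plaintext unix sockets, so insecure credentials suffice.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same filter shape as the logged request: only CONTAINER_RUNNING containers.
	// Passing a nil Filter (or nil State) instead also returns exited containers,
	// which is why the unfiltered responses in the log are longer.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
		&runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// Truncated id, container name, and state, roughly one table row each.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The "container status" table that follows is, in effect, a human-readable rendering of the same response.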
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	83c7cffc192d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          9 minutes ago       Running             busybox                   0                   30b4f748049f4       busybox                                    default
	d3d8b65697554       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             9 minutes ago       Running             controller                0                   0a1212c05ea88       ingress-nginx-controller-6c8bf45fb-6hpj8   ingress-nginx
	ebdc020b24013       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   10 minutes ago      Exited              patch                     0                   aab95fc7e29c5       ingress-nginx-admission-patch-xqmtg        ingress-nginx
	cf24d40d09d97       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   10 minutes ago      Exited              create                    0                   b81a00087e290       ingress-nginx-admission-create-whk72       ingress-nginx
	3ed48acc4e6b6       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               11 minutes ago      Running             minikube-ingress-dns      0                   e08ae02d97821       kube-ingress-dns-minikube                  kube-system
	1f0a03ae88dd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   7a8aea6b56873       storage-provisioner                        kube-system
	dc04223232fbc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     12 minutes ago      Running             amd-gpu-device-plugin     0                   1c75fb61317d9       amd-gpu-device-plugin-sl95v                kube-system
	44ea167ad7358       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             12 minutes ago      Running             coredns                   0                   1b8aec92deac0       coredns-66bc5c9577-tpfkd                   kube-system
	107772b7cd302       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             12 minutes ago      Running             kube-proxy                0                   44459bb4c1592       kube-proxy-92nmr                           kube-system
	1d2feff972c82       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             12 minutes ago      Running             kube-scheduler            0                   7854300bd65f2       kube-scheduler-addons-947553               kube-system
	3ce144c0d06ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             12 minutes ago      Running             kube-apiserver            0                   c0df804390cc3       kube-apiserver-addons-947553               kube-system
	3f04fbc5a9a9d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             12 minutes ago      Running             kube-controller-manager   0                   c73098b299e79       kube-controller-manager-addons-947553      kube-system
	1b4f51aca4917       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             12 minutes ago      Running             etcd                      0                   959ac70855500       etcd-addons-947553                         kube-system
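
The listing above covers only what was still on the node at capture time; the registry and registry-proxy containers exercised by the failing test are absent, presumably torn down with the addon before the logs were collected. This view can be reproduced on a live profile (the profile name is taken from this run; the crictl flags are standard):

    $ minikube -p addons-947553 ssh -- sudo crictl ps -a
    # -a includes exited containers, e.g. the admission create/patch jobs above.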
	
	
	==> coredns [44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86] <==
	[INFO] 10.244.0.8:38281 - 13381 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000419309s
	[INFO] 10.244.0.8:38281 - 4239 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000335145s
	[INFO] 10.244.0.8:38281 - 63093 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000099875s
	[INFO] 10.244.0.8:38281 - 4801 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008321s
	[INFO] 10.244.0.8:38281 - 39674 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000264028s
	[INFO] 10.244.0.8:38281 - 62546 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124048s
	[INFO] 10.244.0.8:38281 - 16805 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000647057s
	[INFO] 10.244.0.8:51997 - 13985 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160466s
	[INFO] 10.244.0.8:51997 - 14298 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000220652s
	[INFO] 10.244.0.8:45076 - 61133 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125223s
	[INFO] 10.244.0.8:45076 - 60865 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000152664s
	[INFO] 10.244.0.8:36522 - 44178 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060404s
	[INFO] 10.244.0.8:36522 - 43995 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078705s
	[INFO] 10.244.0.8:59475 - 4219 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116054s
	[INFO] 10.244.0.8:59475 - 4422 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010261s
	[INFO] 10.244.0.23:44890 - 42394 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000390546s
	[INFO] 10.244.0.23:40413 - 38581 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001287022s
	[INFO] 10.244.0.23:48952 - 288 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001963576s
	[INFO] 10.244.0.23:45971 - 54062 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002169261s
	[INFO] 10.244.0.23:46787 - 19498 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139649s
	[INFO] 10.244.0.23:50609 - 21977 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067547s
	[INFO] 10.244.0.23:44756 - 29378 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.005330443s
	[INFO] 10.244.0.23:59657 - 39385 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005346106s
	[INFO] 10.244.0.27:42107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000463345s
	[INFO] 10.244.0.27:53096 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000254044s
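
Read against the Registry failure, these entries exonerate DNS: the NXDOMAIN lines are just the pod's search-path expansion (the .kube-system.svc…, .svc…, and .cluster.local suffixes), while A and AAAA queries for the bare name registry.kube-system.svc.cluster.local return NOERROR in well under a millisecond. The wget timeout therefore happened after name resolution. A minimal re-check, using the same busybox image as the test (the pod name dns-check is arbitrary):

    $ kubectl --context addons-947553 run dns-check --rm -it --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -- \
        nslookup registry.kube-system.svc.cluster.local
    # dns-check is an arbitrary throwaway pod name, not one from this run.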
	
	
	==> describe nodes <==
	Name:               addons-947553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-947553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-947553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-947553
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:21:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-947553
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:34:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:32:59 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:32:59 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:32:59 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:32:59 +0000   Thu, 20 Nov 2025 20:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    addons-947553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ab490c5e4f046af88ecdee8117466b4
	  System UUID:                2ab490c5-e4f0-46af-88ec-dee8117466b4
	  Boot ID:                    1ea0245c-4d70-493b-9a36-f639a36dba5f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-6hpj8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 amd-gpu-device-plugin-sl95v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-tpfkd                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-947553                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-947553                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-947553       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-92nmr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-947553                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-947553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-947553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-947553 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-947553 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-947553 event: Registered Node addons-947553 in Controller
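
The node itself is small: 2 CPUs and roughly 3.8Gi of memory, with 850m of CPU already requested (850m / 2000m ≈ 42%, as the table shows) and memory limits set only on coredns. Nothing here is at eviction levels, but 13 pods plus parallel tests on a 2-vCPU VM leaves little headroom, which matters for the etcd latencies below. To re-inspect pressure on a live cluster:

    $ kubectl --context addons-947553 describe node addons-947553
    $ kubectl --context addons-947553 top node
    # 'top node' needs a healthy metrics-server; the apiserver log below shows
    # it was intermittently unreachable during this run.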
	
	
	==> dmesg <==
	[  +6.168214] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.651247] kauditd_printk_skb: 17 callbacks suppressed
	[Nov20 20:23] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.679825] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.000053] kauditd_printk_skb: 157 callbacks suppressed
	[  +5.059481] kauditd_printk_skb: 109 callbacks suppressed
	[Nov20 20:24] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.445964] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.477031] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.089818] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:25] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.536974] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.509608] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000043] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:26] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.002720] kauditd_printk_skb: 50 callbacks suppressed
	[ +11.737417] kauditd_printk_skb: 103 callbacks suppressed
	[Nov20 20:27] kauditd_printk_skb: 15 callbacks suppressed
	[Nov20 20:28] kauditd_printk_skb: 21 callbacks suppressed
	[Nov20 20:29] kauditd_printk_skb: 9 callbacks suppressed
	[Nov20 20:30] kauditd_printk_skb: 26 callbacks suppressed
	[ +21.384911] kauditd_printk_skb: 9 callbacks suppressed
	[Nov20 20:31] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45] <==
	{"level":"info","ts":"2025-11-20T20:23:44.571673Z","caller":"traceutil/trace.go:172","msg":"trace[884414279] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1098; }","duration":"111.548598ms","start":"2025-11-20T20:23:44.460117Z","end":"2025-11-20T20:23:44.571666Z","steps":["trace[884414279] 'agreement among raft nodes before linearized reading'  (duration: 111.465445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571061Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.869609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.80\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-11-20T20:23:44.571810Z","caller":"traceutil/trace.go:172","msg":"trace[1446846650] range","detail":"{range_begin:/registry/masterleases/192.168.39.80; range_end:; response_count:1; response_revision:1098; }","duration":"155.64428ms","start":"2025-11-20T20:23:44.416161Z","end":"2025-11-20T20:23:44.571805Z","steps":["trace[1446846650] 'agreement among raft nodes before linearized reading'  (duration: 154.810085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:23:46.528477Z","caller":"traceutil/trace.go:172","msg":"trace[982384876] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"154.809492ms","start":"2025-11-20T20:23:46.373650Z","end":"2025-11-20T20:23:46.528459Z","steps":["trace[982384876] 'process raft request'  (duration: 154.328485ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.123570Z","caller":"traceutil/trace.go:172","msg":"trace[1335763238] linearizableReadLoop","detail":"{readStateIndex:1253; appliedIndex:1253; }","duration":"134.10576ms","start":"2025-11-20T20:24:24.989438Z","end":"2025-11-20T20:24:25.123544Z","steps":["trace[1335763238] 'read index received'  (duration: 134.100119ms)","trace[1335763238] 'applied index is now lower than readState.Index'  (duration: 5.092µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:25.123838Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.381481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-11-20T20:24:25.123864Z","caller":"traceutil/trace.go:172","msg":"trace[1178674559] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1204; }","duration":"134.473479ms","start":"2025-11-20T20:24:24.989384Z","end":"2025-11-20T20:24:25.123857Z","steps":["trace[1178674559] 'agreement among raft nodes before linearized reading'  (duration: 134.302699ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:24:25.124126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.465459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:25.124145Z","caller":"traceutil/trace.go:172","msg":"trace[392254424] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1205; }","duration":"131.486967ms","start":"2025-11-20T20:24:24.992652Z","end":"2025-11-20T20:24:25.124139Z","steps":["trace[392254424] 'agreement among raft nodes before linearized reading'  (duration: 131.453666ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.124311Z","caller":"traceutil/trace.go:172","msg":"trace[1682962710] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"237.606056ms","start":"2025-11-20T20:24:24.886699Z","end":"2025-11-20T20:24:25.124305Z","steps":["trace[1682962710] 'process raft request'  (duration: 237.320378ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.314678Z","caller":"traceutil/trace.go:172","msg":"trace[1797119853] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1279; }","duration":"155.702658ms","start":"2025-11-20T20:24:29.158960Z","end":"2025-11-20T20:24:29.314662Z","steps":["trace[1797119853] 'read index received'  (duration: 155.696769ms)","trace[1797119853] 'applied index is now lower than readState.Index'  (duration: 4.683µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:29.314797Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.822209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:29.314815Z","caller":"traceutil/trace.go:172","msg":"trace[163313341] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1230; }","duration":"155.853309ms","start":"2025-11-20T20:24:29.158956Z","end":"2025-11-20T20:24:29.314809Z","steps":["trace[163313341] 'agreement among raft nodes before linearized reading'  (duration: 155.793828ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.315341Z","caller":"traceutil/trace.go:172","msg":"trace[932727743] transaction","detail":"{read_only:false; response_revision:1231; number_of_response:1; }","duration":"158.601334ms","start":"2025-11-20T20:24:29.156731Z","end":"2025-11-20T20:24:29.315333Z","steps":["trace[932727743] 'process raft request'  (duration: 158.264408ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.860975Z","caller":"traceutil/trace.go:172","msg":"trace[570114600] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"232.699788ms","start":"2025-11-20T20:24:38.628262Z","end":"2025-11-20T20:24:38.860962Z","steps":["trace[570114600] 'process raft request'  (duration: 232.584342ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.862428Z","caller":"traceutil/trace.go:172","msg":"trace[1632150606] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"194.825132ms","start":"2025-11-20T20:24:38.667594Z","end":"2025-11-20T20:24:38.862419Z","steps":["trace[1632150606] 'process raft request'  (duration: 194.764757ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:25:59.796917Z","caller":"traceutil/trace.go:172","msg":"trace[1018787678] transaction","detail":"{read_only:false; response_revision:1587; number_of_response:1; }","duration":"178.519957ms","start":"2025-11-20T20:25:59.618371Z","end":"2025-11-20T20:25:59.796891Z","steps":["trace[1018787678] 'process raft request'  (duration: 178.419059ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:26:07.306954Z","caller":"traceutil/trace.go:172","msg":"trace[1832150044] linearizableReadLoop","detail":"{readStateIndex:1696; appliedIndex:1696; }","duration":"207.161975ms","start":"2025-11-20T20:26:07.099774Z","end":"2025-11-20T20:26:07.306936Z","steps":["trace[1832150044] 'read index received'  (duration: 207.151183ms)","trace[1832150044] 'applied index is now lower than readState.Index'  (duration: 6.599µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:26:07.307088Z","caller":"traceutil/trace.go:172","msg":"trace[519307734] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"362.807072ms","start":"2025-11-20T20:26:06.944270Z","end":"2025-11-20T20:26:07.307077Z","steps":["trace[519307734] 'process raft request'  (duration: 362.695059ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:26:07.307192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.369314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3725"}
	{"level":"info","ts":"2025-11-20T20:26:07.307216Z","caller":"traceutil/trace.go:172","msg":"trace[875135275] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:1621; }","duration":"207.439279ms","start":"2025-11-20T20:26:07.099770Z","end":"2025-11-20T20:26:07.307209Z","steps":["trace[875135275] 'agreement among raft nodes before linearized reading'  (duration: 207.290795ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:26:07.307851Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:26:06.944254Z","time spent":"362.881173ms","remote":"127.0.0.1:35880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3014,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:1620 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:2970 >> failure:<request_range:<key:\"/registry/pods/default/registry-test\" > >"}
	{"level":"info","ts":"2025-11-20T20:31:40.925161Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1775}
	{"level":"info","ts":"2025-11-20T20:31:40.982628Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1775,"took":"56.716602ms","hash":2071741257,"current-db-size-bytes":6230016,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4100096,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-11-20T20:31:40.982675Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2071741257,"revision":1775,"compact-revision":-1}
	
	
	==> kernel <==
	 20:34:09 up 13 min,  0 users,  load average: 0.26, 0.72, 0.77
	Linux addons-947553 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2] <==
	E1120 20:23:34.271232       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	I1120 20:23:34.434058       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1120 20:24:45.470175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50698: use of closed network connection
	E1120 20:24:45.698946       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50724: use of closed network connection
	I1120 20:24:55.153735       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.73.86"}
	I1120 20:25:35.271669       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1120 20:26:07.917022       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1120 20:26:08.188570       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.64.46"}
	E1120 20:29:56.936137       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1120 20:29:56.944298       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1120 20:29:56.956788       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1120 20:31:32.035670       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1120 20:31:32.035837       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1120 20:31:32.075003       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1120 20:31:32.075066       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1120 20:31:32.087810       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1120 20:31:32.087865       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1120 20:31:32.109153       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1120 20:31:32.109209       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1120 20:31:32.138723       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1120 20:31:32.138772       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1120 20:31:33.092222       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1120 20:31:33.139075       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1120 20:31:33.265320       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1120 20:31:42.586160       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
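
Three separate stories in the apiserver log: early aggregation failures for v1beta1.metrics.k8s.io (connection refused to 10.109.97.199:443, i.e. metrics-server not yet serving), bearer-token failures for a local-path-provisioner service account that no longer exists, and the snapshot.storage.k8s.io groups being registered and then torn down at 20:31 (the "Terminating all watchers" lines), apparently as the snapshot addon was disabled. None of these touch the registry path. The aggregated API's health can be queried directly:

    $ kubectl --context addons-947553 get apiservice v1beta1.metrics.k8s.io
    # The AVAILABLE column reflects the same reachability probe that produced
    # the "failing or missing response" error above.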
	
	
	==> kube-controller-manager [3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be] <==
	E1120 20:32:09.431662       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1120 20:32:09.432631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1120 20:32:19.563584       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:32:34.564110       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:32:43.395911       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1120 20:32:43.397130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1120 20:32:48.492032       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1120 20:32:48.493549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1120 20:32:49.564304       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:32:58.847398       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1120 20:32:58.848661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1120 20:33:04.565569       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:33:19.566126       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:33:33.063115       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1120 20:33:33.064301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1120 20:33:34.382855       1 csi_attacher.go:520] kubernetes.io/csi: Attach timeout after 2m0s [volume=0e240b9d-c64f-11f0-b3a1-2ada7a71e2df; attachment.ID=csi-54183f4f5fa88247d0a1c83f893733ec6225c6b6922654e7a6fde3ccc5fd8c8a]
	E1120 20:33:34.383071       1 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^0e240b9d-c64f-11f0-b3a1-2ada7a71e2df podName: nodeName:}" failed. No retries permitted until 2025-11-20 20:33:34.883027365 +0000 UTC m=+714.794699266 (durationBeforeRetry 500ms). Error: AttachVolume.Attach failed for volume "pvc-0f319206-7bda-4a24-a80d-ac987afb3775" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^0e240b9d-c64f-11f0-b3a1-2ada7a71e2df") from node "addons-947553" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 0e240b9d-c64f-11f0-b3a1-2ada7a71e2df
	E1120 20:33:34.566724       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	I1120 20:33:34.943213       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^0e240b9d-c64f-11f0-b3a1-2ada7a71e2df" nodeName="addons-947553" scheduledPods=["default/task-pv-pod"]
	E1120 20:33:37.007754       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1120 20:33:37.009013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1120 20:33:38.455943       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1120 20:33:38.456974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1120 20:33:49.567868       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:34:04.568570       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
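
Two distinct storage failures here, both unrelated to the registry test: the PV binder cannot find a "local-path" StorageClass for default/test-pvc (matching the deleted local-path-provisioner service account in the apiserver log), and AttachVolume for pvc-0f319206-7bda-4a24-a80d-ac987afb3775 timed out after 2m0s waiting for the hostpath.csi.k8s.io external-attacher, consistent with the kubelet's csi-hostpath socket connection-refused error below. Quick triage on a live cluster:

    $ kubectl --context addons-947553 get storageclass,pvc,volumeattachment
    # VolumeAttachment is the object the external-attacher must mark attached;
    # a stale entry for the csi-54183f4f… attachment named above would match
    # the 2m timeout.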
	
	
	==> kube-proxy [107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf] <==
	I1120 20:21:51.944081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:21:52.047283       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:21:52.059178       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1120 20:21:52.063486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:21:52.317013       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:21:52.317608       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:21:52.319592       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:21:52.353676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:21:52.353988       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:21:52.354004       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:21:52.365989       1 config.go:200] "Starting service config controller"
	I1120 20:21:52.366010       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:21:52.373413       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:21:52.373476       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:21:52.373601       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:21:52.373606       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:21:52.404955       1 config.go:309] "Starting node config controller"
	I1120 20:21:52.405179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:21:52.405460       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:21:52.474183       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:21:52.474283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:21:52.570175       1 shared_informer.go:356] "Caches are synced" controller="service config"
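
kube-proxy came up cleanly: the ip6tables "Table does not exist" complaint only means the guest kernel lacks IPv6 NAT support, so the proxier falls back to single-stack IPv4 iptables mode, and all informer caches sync within a second. If proxying to the registry ClusterIP were suspect, the programmed rules can be read off the node directly:

    $ minikube -p addons-947553 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20
    # KUBE-SERVICES is the entry chain kube-proxy's iptables proxier programs;
    # the registry Service's ClusterIP should appear as a jump rule here.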
	
	
	==> kube-scheduler [1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b] <==
	E1120 20:21:42.658146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:42.658289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:42.658479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:42.659065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:21:42.659191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:42.659355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:42.659676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:21:42.660629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:43.501696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:21:43.568808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:21:43.596853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:43.607731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:21:43.612970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:21:43.637766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:21:43.650165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:43.687838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:21:43.786838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:43.825959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:21:43.878175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:43.895745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:43.953162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:21:43.991210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:44.021889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:21:44.053100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 20:21:46.731200       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
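
All of the scheduler's "Failed to watch … is forbidden" errors are stamped 20:21:42–44, before its caches sync at 20:21:46: the usual bootstrap race while the apiserver is still publishing the scheduler's RBAC bindings. Nothing recurs afterwards. The permissions are easy to re-verify once the cluster is up:

    $ kubectl --context addons-947553 auth can-i list pods --as=system:kube-scheduler
    # Should answer "yes"; the forbidden errors above predate the RBAC bootstrap.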
	
	
	==> kubelet <==
	Nov 20 20:33:08 addons-947553 kubelet[1518]: I1120 20:33:08.330457    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:33:10 addons-947553 kubelet[1518]: E1120 20:33:10.330411    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="3fabe4f4-d0a9-40fe-a635-e27af546a8ce"
	Nov 20 20:33:15 addons-947553 kubelet[1518]: E1120 20:33:15.783917    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670795782979989  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:15 addons-947553 kubelet[1518]: E1120 20:33:15.784307    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670795782979989  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:22 addons-947553 kubelet[1518]: E1120 20:33:22.330447    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="3fabe4f4-d0a9-40fe-a635-e27af546a8ce"
	Nov 20 20:33:23 addons-947553 kubelet[1518]: W1120 20:33:23.882641    1518 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Nov 20 20:33:25 addons-947553 kubelet[1518]: E1120 20:33:25.787693    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670805787169185  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:25 addons-947553 kubelet[1518]: E1120 20:33:25.788091    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670805787169185  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:27 addons-947553 kubelet[1518]: E1120 20:33:27.536721    1518 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 20 20:33:27 addons-947553 kubelet[1518]: E1120 20:33:27.536829    1518 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 20 20:33:27 addons-947553 kubelet[1518]: E1120 20:33:27.536956    1518 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(261f896c-810b-4000-a18d-13ad1a4b0967): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:33:27 addons-947553 kubelet[1518]: E1120 20:33:27.536995    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="261f896c-810b-4000-a18d-13ad1a4b0967"
	Nov 20 20:33:35 addons-947553 kubelet[1518]: E1120 20:33:35.331275    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="3fabe4f4-d0a9-40fe-a635-e27af546a8ce"
	Nov 20 20:33:35 addons-947553 kubelet[1518]: E1120 20:33:35.790968    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670815790369407  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:35 addons-947553 kubelet[1518]: E1120 20:33:35.791013    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670815790369407  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:42 addons-947553 kubelet[1518]: E1120 20:33:42.333951    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="261f896c-810b-4000-a18d-13ad1a4b0967"
	Nov 20 20:33:45 addons-947553 kubelet[1518]: E1120 20:33:45.795913    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670825795450041  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:45 addons-947553 kubelet[1518]: E1120 20:33:45.795955    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670825795450041  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:54 addons-947553 kubelet[1518]: E1120 20:33:54.335283    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="261f896c-810b-4000-a18d-13ad1a4b0967"
	Nov 20 20:33:55 addons-947553 kubelet[1518]: E1120 20:33:55.798373    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670835797984822  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:55 addons-947553 kubelet[1518]: E1120 20:33:55.798405    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670835797984822  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:33:59 addons-947553 kubelet[1518]: I1120 20:33:59.331115    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sl95v" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:34:05 addons-947553 kubelet[1518]: E1120 20:34:05.803352    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670845801884914  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:34:05 addons-947553 kubelet[1518]: E1120 20:34:05.803429    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670845801884914  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:34:06 addons-947553 kubelet[1518]: E1120 20:34:06.332472    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="261f896c-810b-4000-a18d-13ad1a4b0967"
	
	
	==> storage-provisioner [1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806] <==
	W1120 20:33:44.618576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:46.622121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:46.629323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:48.633872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:48.639734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:50.643545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:50.651883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:52.654928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:52.660603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:54.663963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:54.669227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:56.672266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:56.678184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:58.681955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:33:58.686724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:00.691215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:00.699064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:02.702375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:02.708835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:04.713596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:04.722268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:06.726854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:06.734253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:08.741071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:08.748656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
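The storage-provisioner block above logs the same client-go deprecation warning on every sync: it still lists and watches v1 Endpoints, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice. A minimal sketch for confirming that the cluster already serves EndpointSlice objects (context name taken from this run; the commands are illustrative, not part of the harness):

	# EndpointSlices are the replacement resource the warning points at.
	kubectl --context addons-947553 get endpointslices.discovery.k8s.io -A
	# The legacy objects the provisioner is still watching, for comparison.
	kubectl --context addons-947553 get endpoints -n kube-system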
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-947553 -n addons-947553
helpers_test.go:269: (dbg) Run:  kubectl --context addons-947553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg: exit status 1 (83.937452ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:26:08 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8bvn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s8bvn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m2s                   default-scheduler  Successfully assigned default/nginx to addons-947553
	  Warning  Failed     2m36s (x3 over 6m36s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    103s (x4 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     43s (x4 over 6m36s)    kubelet            Error: ErrImagePull
	  Warning  Failed     43s                    kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x8 over 6m36s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x8 over 6m36s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:25:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw89l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mw89l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                   From                     Message
	  ----     ------              ----                  ----                     -------
	  Normal   Scheduled           8m41s                 default-scheduler        Successfully assigned default/task-pv-pod to addons-947553
	  Warning  Failed              3m6s                  kubelet                  Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed              114s (x3 over 7m37s)  kubelet                  Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed              114s (x4 over 7m37s)  kubelet                  Error: ErrImagePull
	  Warning  FailedAttachVolume  36s                   attachdetach-controller  AttachVolume.Attach failed for volume "pvc-0f319206-7bda-4a24-a80d-ac987afb3775" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 0e240b9d-c64f-11f0-b3a1-2ada7a71e2df
	  Normal   BackOff             35s (x10 over 7m36s)  kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  Failed              35s (x10 over 7m36s)  kubelet                  Error: ImagePullBackOff
	  Normal   Pulling             22s (x5 over 8m41s)   kubelet                  Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7w87 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-w7w87:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-whk72" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xqmtg" not found

                                                
                                                
** /stderr **
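Every pull failure in the describe output above has the same root cause: anonymous pulls from docker.io tripping the unauthenticated rate limit (toomanyrequests). Docker documents a way to inspect the remaining anonymous quota from the affected host; a sketch, assuming curl and jq are available there (neither is part of this test run):

	# Fetch an anonymous pull token for Docker's rate-limit probe repository.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# A HEAD request on a manifest returns ratelimit-limit / ratelimit-remaining headers.
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit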
helpers_test.go:287: kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 addons disable ingress-dns --alsologtostderr -v=1: (1.477401337s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 addons disable ingress --alsologtostderr -v=1: (7.738972383s)
--- FAIL: TestAddons/parallel/Ingress (492.10s)
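One way to keep a CI cluster like this off Docker Hub is a pull-through mirror. minikube exposes a --registry-mirror flag at start time, though with the crio runtime it may additionally require a mirror entry in registries.conf inside the VM, so treat the following as a sketch rather than a verified fix for this job:

	minikube start -p addons-947553 --driver=kvm2 --container-runtime=crio \
	  --registry-mirror=https://mirror.gcr.io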

                                                
                                    
TestAddons/parallel/CSI (386.47s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1120 20:25:12.983998    7706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1120 20:25:12.988946    7706 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1120 20:25:12.988973    7706 kapi.go:107] duration metric: took 4.987588ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.999739ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-947553 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc hpvc -o jsonpath={.status.phase} -n default
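The harness polls the PVC phase via jsonpath once per interval; the same condition can be expressed as a single blocking call with kubectl's jsonpath wait (available in kubectl v1.23+). A sketch equivalent to the loop above:

	kubectl --context addons-947553 -n default wait pvc/hpvc \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s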
addons_test.go:562: (dbg) Run:  kubectl --context addons-947553 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3fabe4f4-d0a9-40fe-a635-e27af546a8ce] Pending
helpers_test.go:352: "task-pv-pod" [3fabe4f4-d0a9-40fe-a635-e27af546a8ce] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-947553 -n addons-947553
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-11-20 20:31:29.516877273 +0000 UTC m=+638.403166026
addons_test.go:567: (dbg) Run:  kubectl --context addons-947553 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-947553 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-947553/192.168.39.80
Start Time:       Thu, 20 Nov 2025 20:25:29 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw89l (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-mw89l:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to addons-947553
Warning  Failed     2m25s (x2 over 4m56s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    118s (x3 over 6m)      kubelet            Pulling image "docker.io/nginx"
Warning  Failed     25s (x3 over 4m56s)    kubelet            Error: ErrImagePull
Warning  Failed     25s                    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    9s (x3 over 4m55s)     kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     9s (x3 over 4m55s)     kubelet            Error: ImagePullBackOff
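Since the pulls fail because they are anonymous, authenticating them is another way out: authenticated Docker Hub pulls get a much higher limit. A sketch of attaching registry credentials to the default service account so pods like task-pv-pod pull with a token (the username and token below are placeholders, not values from this run):

	kubectl --context addons-947553 -n default create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-947553 -n default patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'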
addons_test.go:567: (dbg) Run:  kubectl --context addons-947553 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-947553 logs task-pv-pod -n default: exit status 1 (75.952373ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-947553 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
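The registry can also be bypassed entirely by preloading the image into the cluster's runtime before the test needs it. minikube's image load subcommand does this; a sketch, assuming docker.io/nginx has already been pulled into the local image store on the CI host:

	docker pull docker.io/nginx                      # once, on the CI host
	minikube -p addons-947553 image load docker.io/nginx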
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-947553 -n addons-947553
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 logs -n 25: (1.201299957s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ -o=json --download-only -p download-only-948147 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ --download-only -p binary-mirror-717684 --alsologtostderr --binary-mirror http://127.0.0.1:46607 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ -p binary-mirror-717684                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ addons  │ disable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ addons  │ enable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ start   │ -p addons-947553 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ enable headlamp -p addons-947553 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ ip      │ addons-947553 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                         │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:28 UTC │ 20 Nov 25 20:28 UTC │
	│ addons  │ addons-947553 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │ 20 Nov 25 20:30 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:04.799759    8315 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:04.799869    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.799880    8315 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:04.799886    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.800101    8315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:21:04.800589    8315 out.go:368] Setting JSON to false
	I1120 20:21:04.801389    8315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":215,"bootTime":1763669850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:21:04.801502    8315 start.go:143] virtualization: kvm guest
	I1120 20:21:04.803491    8315 out.go:179] * [addons-947553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:21:04.804816    8315 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:21:04.804809    8315 notify.go:221] Checking for updates...
	I1120 20:21:04.807406    8315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:04.808794    8315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:04.810101    8315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:04.811420    8315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:21:04.812487    8315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:21:04.813679    8315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:21:04.845057    8315 out.go:179] * Using the kvm2 driver based on user configuration
	I1120 20:21:04.846216    8315 start.go:309] selected driver: kvm2
	I1120 20:21:04.846231    8315 start.go:930] validating driver "kvm2" against <nil>
	I1120 20:21:04.846241    8315 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:21:04.846961    8315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:21:04.847180    8315 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:21:04.847211    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:04.847249    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:04.847263    8315 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1120 20:21:04.847320    8315 start.go:353] cluster config:
	{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:04.847407    8315 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:21:04.848659    8315 out.go:179] * Starting "addons-947553" primary control-plane node in "addons-947553" cluster
	I1120 20:21:04.849659    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:04.849691    8315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:21:04.849701    8315 cache.go:65] Caching tarball of preloaded images
	I1120 20:21:04.849792    8315 preload.go:238] Found /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:21:04.849803    8315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:21:04.850086    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:04.850110    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json: {Name:mk61841fddacaf75a98d91c699b32f9aeeaf9c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:04.850231    8315 start.go:360] acquireMachinesLock for addons-947553: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 20:21:04.850284    8315 start.go:364] duration metric: took 40.752µs to acquireMachinesLock for "addons-947553"
	I1120 20:21:04.850302    8315 start.go:93] Provisioning new machine with config: &{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:04.850352    8315 start.go:125] createHost starting for "" (driver="kvm2")
	I1120 20:21:04.852328    8315 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1120 20:21:04.852480    8315 start.go:159] libmachine.API.Create for "addons-947553" (driver="kvm2")
	I1120 20:21:04.852506    8315 client.go:173] LocalClient.Create starting
	I1120 20:21:04.852580    8315 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem
	I1120 20:21:05.105122    8315 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem
	I1120 20:21:05.182169    8315 main.go:143] libmachine: creating domain...
	I1120 20:21:05.182188    8315 main.go:143] libmachine: creating network...
	I1120 20:21:05.183682    8315 main.go:143] libmachine: found existing default network
	I1120 20:21:05.183926    8315 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.184462    8315 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d98350}
	I1120 20:21:05.184549    8315 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-947553</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.190086    8315 main.go:143] libmachine: creating private network mk-addons-947553 192.168.39.0/24...
	I1120 20:21:05.255182    8315 main.go:143] libmachine: private network mk-addons-947553 192.168.39.0/24 created
	I1120 20:21:05.255605    8315 main.go:143] libmachine: <network>
	  <name>mk-addons-947553</name>
	  <uuid>aa8efef2-a4fa-46da-99ec-8e728046a9cf</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:9d:8a:68'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.255642    8315 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.255667    8315 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1120 20:21:05.255686    8315 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.255775    8315 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21923-3793/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1120 20:21:05.515325    8315 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa...
	I1120 20:21:05.718020    8315 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk...
	I1120 20:21:05.718065    8315 main.go:143] libmachine: Writing magic tar header
	I1120 20:21:05.718104    8315 main.go:143] libmachine: Writing SSH key tar header
	I1120 20:21:05.718203    8315 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.718284    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553
	I1120 20:21:05.718335    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 (perms=drwx------)
	I1120 20:21:05.718363    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines
	I1120 20:21:05.718383    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines (perms=drwxr-xr-x)
	I1120 20:21:05.718404    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.718421    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube (perms=drwxr-xr-x)
	I1120 20:21:05.718438    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793
	I1120 20:21:05.718456    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793 (perms=drwxrwxr-x)
	I1120 20:21:05.718473    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1120 20:21:05.718490    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1120 20:21:05.718505    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1120 20:21:05.718521    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1120 20:21:05.718536    8315 main.go:143] libmachine: checking permissions on dir: /home
	I1120 20:21:05.718549    8315 main.go:143] libmachine: skipping /home - not owner
	I1120 20:21:05.718557    8315 main.go:143] libmachine: defining domain...
	I1120 20:21:05.719886    8315 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
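
The XML above is the input definition handed to libvirt. As a rough illustration of what the "defining domain..." step amounts to, here is a minimal Go sketch assuming the libvirt.org/go/libvirt bindings are imported; the function name and wiring are illustrative, not minikube's actual kvm2 driver code:

	// defineAndStart registers a persistent domain from XML and boots it.
	// Sketch only; the real driver adds locking and richer error handling.
	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the config dump below
		if err != nil {
			return err
		}
		defer conn.Close()

		// DomainDefineXML persists the definition; libvirt fills in anything
		// omitted (UUID, machine type, controllers, PCI addresses).
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()

		return dom.Create() // the `virsh start addons-947553` equivalent
	}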
	
	I1120 20:21:05.727760    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:79:1f:b5 in network default
	I1120 20:21:05.728410    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:05.728434    8315 main.go:143] libmachine: starting domain...
	I1120 20:21:05.728441    8315 main.go:143] libmachine: ensuring networks are active...
	I1120 20:21:05.729136    8315 main.go:143] libmachine: Ensuring network default is active
	I1120 20:21:05.729504    8315 main.go:143] libmachine: Ensuring network mk-addons-947553 is active
	I1120 20:21:05.730087    8315 main.go:143] libmachine: getting domain XML...
	I1120 20:21:05.731121    8315 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <uuid>2ab490c5-e4f0-46af-88ec-dee8117466b4</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:a7:2c'/>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:79:1f:b5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
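
Comparing this dump with the XML defined earlier shows what libvirt filled in: the UUID, the pc-i440fx machine type, the USB/PCI/SCSI controllers, and per-device PCI addresses. Reading the expanded definition back is a single call in the same Go bindings assumed above (dom being the *libvirt.Domain returned by DomainDefineXML):

	// dumpDomainXML fetches the persisted, default-expanded definition,
	// i.e. the "starting domain XML" text logged above. Sketch only.
	func dumpDomainXML(dom *libvirt.Domain) (string, error) {
		return dom.GetXMLDesc(0) // 0 flags: current definition as libvirt stores it
	}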
	
	I1120 20:21:07.012614    8315 main.go:143] libmachine: waiting for domain to start...
	I1120 20:21:07.013937    8315 main.go:143] libmachine: domain is now running
	I1120 20:21:07.013958    8315 main.go:143] libmachine: waiting for IP...
	I1120 20:21:07.014713    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.015361    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.015380    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.015661    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.015708    8315 retry.go:31] will retry after 270.684091ms: waiting for domain to come up
	I1120 20:21:07.288186    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.288839    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.288865    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.289198    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.289247    8315 retry.go:31] will retry after 384.258097ms: waiting for domain to come up
	I1120 20:21:07.674731    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.675347    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.675362    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.675602    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.675642    8315 retry.go:31] will retry after 325.268494ms: waiting for domain to come up
	I1120 20:21:08.002089    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.002712    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.002729    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.003011    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.003044    8315 retry.go:31] will retry after 532.953777ms: waiting for domain to come up
	I1120 20:21:08.537708    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.538539    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.538554    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.538839    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.538878    8315 retry.go:31] will retry after 671.32775ms: waiting for domain to come up
	I1120 20:21:09.212032    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.212741    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.212765    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.213102    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.213142    8315 retry.go:31] will retry after 640.716702ms: waiting for domain to come up
	I1120 20:21:09.855420    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.856063    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.856083    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.856391    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.856428    8315 retry.go:31] will retry after 715.495515ms: waiting for domain to come up
	I1120 20:21:10.573053    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:10.573668    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:10.573685    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:10.574006    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:10.574049    8315 retry.go:31] will retry after 1.386473849s: waiting for domain to come up
	I1120 20:21:11.962706    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:11.963438    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:11.963454    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:11.963745    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:11.963779    8315 retry.go:31] will retry after 1.671471747s: waiting for domain to come up
	I1120 20:21:13.637832    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:13.638601    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:13.638620    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:13.639009    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:13.639040    8315 retry.go:31] will retry after 1.524844768s: waiting for domain to come up
	I1120 20:21:15.165792    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:15.166517    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:15.166555    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:15.166908    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:15.166949    8315 retry.go:31] will retry after 2.171556586s: waiting for domain to come up
	I1120 20:21:17.341326    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:17.341989    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:17.342008    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:17.342371    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:17.342408    8315 retry.go:31] will retry after 2.613437366s: waiting for domain to come up
	I1120 20:21:19.957329    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:19.958097    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:19.958115    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:19.958466    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:19.958501    8315 retry.go:31] will retry after 4.105323605s: waiting for domain to come up
	I1120 20:21:24.068938    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069767    8315 main.go:143] libmachine: domain addons-947553 has current primary IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069790    8315 main.go:143] libmachine: found domain IP: 192.168.39.80
	I1120 20:21:24.069802    8315 main.go:143] libmachine: reserving static IP address...
	I1120 20:21:24.070350    8315 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-947553", mac: "52:54:00:7b:a7:2c", ip: "192.168.39.80"} in network mk-addons-947553
	I1120 20:21:24.251658    8315 main.go:143] libmachine: reserved static IP address 192.168.39.80 for domain addons-947553
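
The "will retry after ..." lines above come from minikube's retry helper (the retry.go:31 call site): the driver polls the network's DHCP leases, falling back to ARP, with a growing jittered interval until the guest picks up 192.168.39.80. A generic sketch of that polling pattern in Go, assuming errors, log, math/rand, and time are imported (illustrative; minikube's own helper differs in detail):

	// waitForIP polls lookup with jittered, roughly doubling backoff,
	// in the spirit of the 270ms -> 4.1s progression in the log above.
	func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2))) // add jitter
			log.Printf("will retry after %s: waiting for domain to come up", sleep)
			time.Sleep(sleep)
			if backoff < 4*time.Second {
				backoff *= 2 // grow the interval, capped
			}
		}
		return "", errors.New("timed out waiting for domain IP")
	}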
	I1120 20:21:24.251676    8315 main.go:143] libmachine: waiting for SSH...
	I1120 20:21:24.251682    8315 main.go:143] libmachine: Getting to WaitForSSH function...
	I1120 20:21:24.254839    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255480    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.255507    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255698    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.255932    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.255946    8315 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1120 20:21:24.357511    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:21:24.357947    8315 main.go:143] libmachine: domain creation complete
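
The WaitForSSH step above is a reachability probe: dial port 22 with the generated id_rsa and run `exit 0`. A self-contained sketch using golang.org/x/crypto/ssh (the "docker" user and the key come from the sshutil lines below; everything else, including the function name, is illustrative):

	// probeSSH returns nil once the guest accepts a key-authenticated
	// session and a trivial command succeeds.
	func probeSSH(addr string, keyPEM []byte) error {
		signer, err := ssh.ParsePrivateKey(keyPEM)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg) // e.g. "192.168.39.80:22"
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // nil error == shell reachable
	}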
	I1120 20:21:24.359373    8315 machine.go:94] provisionDockerMachine start ...
	I1120 20:21:24.361503    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.361927    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.361949    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.362121    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.362368    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.362381    8315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:21:24.462018    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1120 20:21:24.462045    8315 buildroot.go:166] provisioning hostname "addons-947553"
	I1120 20:21:24.464884    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465302    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.465327    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465556    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.465783    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.465796    8315 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-947553 && echo "addons-947553" | sudo tee /etc/hostname
	I1120 20:21:24.590591    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947553
	
	I1120 20:21:24.593332    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593716    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.593739    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593959    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.594201    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.594220    8315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-947553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-947553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-947553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:21:24.704349    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:21:24.704375    8315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 20:21:24.704425    8315 buildroot.go:174] setting up certificates
	I1120 20:21:24.704437    8315 provision.go:84] configureAuth start
	I1120 20:21:24.707018    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.707382    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.707405    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709518    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709819    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.709844    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709960    8315 provision.go:143] copyHostCerts
	I1120 20:21:24.710021    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 20:21:24.710131    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 20:21:24.710204    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 20:21:24.710279    8315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.addons-947553 san=[127.0.0.1 192.168.39.80 addons-947553 localhost minikube]
	I1120 20:21:24.868893    8315 provision.go:177] copyRemoteCerts
	I1120 20:21:24.868955    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:21:24.871421    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.871778    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.871813    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.872001    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:24.954555    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:21:24.986020    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:21:25.016669    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:21:25.046712    8315 provision.go:87] duration metric: took 342.262806ms to configureAuth
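
configureAuth above mints a CA-signed server certificate whose SANs cover every name the machine may be dialed by ([127.0.0.1 192.168.39.80 addons-947553 localhost minikube], per the provision.go line). A standard-library sketch of that step, assuming crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix, encoding/pem, math/big, net, and time are imported; the helper name and key size are illustrative:

	// signServerCert mints a server certificate for the SAN set above,
	// signed by the provided CA. A real implementation would also return
	// and persist the generated private key (server-key.pem).
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-947553"}}, // org= above
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-947553", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.80")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}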
	I1120 20:21:25.046739    8315 buildroot.go:189] setting minikube options for container-runtime
	I1120 20:21:25.046974    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:25.049642    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050132    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.050155    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050331    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.050555    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.050571    8315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:21:25.295480    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
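
The sysconfig write above is how the --insecure-registry flag for the service CIDR (10.96.0.0/12) reaches CRI-O: the guest's crio.service is assumed to source /etc/sysconfig/crio.minikube and expand $CRIO_MINIKUBE_OPTIONS on its command line, roughly like the following excerpt (reconstructed, not dumped in this log), which is why the command ends with `systemctl restart crio`:

	# crio.service (illustrative excerpt, assumed unit layout)
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS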
	
	I1120 20:21:25.295505    8315 machine.go:97] duration metric: took 936.115627ms to provisionDockerMachine
	I1120 20:21:25.295517    8315 client.go:176] duration metric: took 20.443004703s to LocalClient.Create
	I1120 20:21:25.295530    8315 start.go:167] duration metric: took 20.443049547s to libmachine.API.Create "addons-947553"
	I1120 20:21:25.295539    8315 start.go:293] postStartSetup for "addons-947553" (driver="kvm2")
	I1120 20:21:25.295551    8315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:21:25.295599    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:21:25.298453    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.298889    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.298912    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.299118    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.380706    8315 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:21:25.386067    8315 info.go:137] Remote host: Buildroot 2025.02
	I1120 20:21:25.386096    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 20:21:25.386163    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 20:21:25.386186    8315 start.go:296] duration metric: took 90.641008ms for postStartSetup
	I1120 20:21:25.389037    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389412    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.389432    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389654    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:25.389819    8315 start.go:128] duration metric: took 20.539459484s to createHost
	I1120 20:21:25.392104    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392481    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.392504    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392693    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.392952    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.392965    8315 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 20:21:25.493567    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763670085.456620738
	
	I1120 20:21:25.493591    8315 fix.go:216] guest clock: 1763670085.456620738
	I1120 20:21:25.493598    8315 fix.go:229] Guest: 2025-11-20 20:21:25.456620738 +0000 UTC Remote: 2025-11-20 20:21:25.389830223 +0000 UTC m=+20.636741018 (delta=66.790515ms)
	I1120 20:21:25.493614    8315 fix.go:200] guest clock delta is within tolerance: 66.790515ms
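
The delta is simply the guest's `date +%s.%N` reading minus the host-side reference time; a quick check in Go with the two timestamps from the fix.go lines above (inside a main with fmt and time imported):

	guest := time.Unix(1763670085, 456620738) // guest clock, from `date +%s.%N`
	host := time.Unix(1763670085, 389830223)  // host-side Remote timestamp
	fmt.Println(guest.Sub(host))              // prints 66.790515ms, matching the log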
	I1120 20:21:25.493618    8315 start.go:83] releasing machines lock for "addons-947553", held for 20.643324737s
	I1120 20:21:25.496394    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.496731    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.496750    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.497416    8315 ssh_runner.go:195] Run: cat /version.json
	I1120 20:21:25.497480    8315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:21:25.500666    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.500828    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501105    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501135    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501175    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501196    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501333    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.501488    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.605393    8315 ssh_runner.go:195] Run: systemctl --version
	I1120 20:21:25.612006    8315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:21:25.772800    8315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:21:25.780223    8315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:21:25.780282    8315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:21:25.801102    8315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 20:21:25.801129    8315 start.go:496] detecting cgroup driver to use...
	I1120 20:21:25.801204    8315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:21:25.821353    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:21:25.843177    8315 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:21:25.843231    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:21:25.868522    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:21:25.885911    8315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:21:26.035325    8315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:21:26.252665    8315 docker.go:234] disabling docker service ...
	I1120 20:21:26.252745    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:21:26.269964    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:21:26.285883    8315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:21:26.444730    8315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:21:26.588236    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:21:26.605731    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:21:26.631197    8315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:21:26.631278    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.644989    8315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 20:21:26.645074    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.659053    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.672870    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.687322    8315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:21:26.702284    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.716913    8315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.738871    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
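
Taken together, the sed edits above leave the CRI-O drop-in looking roughly like this (reconstructed from the commands; the file itself is not dumped in this log):

	# /etc/crio/crio.conf.d/02-crio.conf (illustrative reconstruction)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]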
	I1120 20:21:26.752362    8315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:21:26.763831    8315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1120 20:21:26.763912    8315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1120 20:21:26.789002    8315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:21:26.803924    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:26.952317    8315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:21:27.200343    8315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:21:27.200435    8315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:21:27.206384    8315 start.go:564] Will wait 60s for crictl version
	I1120 20:21:27.206464    8315 ssh_runner.go:195] Run: which crictl
	I1120 20:21:27.211256    8315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 20:21:27.250686    8315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 20:21:27.250789    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.281244    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.453589    8315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1120 20:21:27.519790    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520199    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:27.520222    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520413    8315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1120 20:21:27.525676    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:27.542910    8315 kubeadm.go:884] updating cluster {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:21:27.543059    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:27.543129    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:27.574818    8315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 20:21:27.574926    8315 ssh_runner.go:195] Run: which lz4
	I1120 20:21:27.580276    8315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1120 20:21:27.587089    8315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1120 20:21:27.587120    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1120 20:21:29.151749    8315 crio.go:462] duration metric: took 1.571528535s to copy over tarball
	I1120 20:21:29.151825    8315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1120 20:21:30.840010    8315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.688159594s)
	I1120 20:21:30.840053    8315 crio.go:469] duration metric: took 1.688277204s to extract the tarball
	I1120 20:21:30.840061    8315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1120 20:21:30.882678    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:30.922657    8315 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:21:30.922680    8315 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:21:30.922687    8315 kubeadm.go:935] updating node { 192.168.39.80 8443 v1.34.1 crio true true} ...
	I1120 20:21:30.922783    8315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-947553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
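
The empty ExecStart= line in the drop-in above is deliberate systemd practice: for a non-oneshot service, a drop-in must first clear the inherited ExecStart list before supplying a replacement, which is how minikube swaps in the full kubelet command line without editing the base unit. The merged result can be inspected on the node with:

	systemctl cat kubelet.service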
	I1120 20:21:30.922874    8315 ssh_runner.go:195] Run: crio config
	I1120 20:21:30.970750    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:30.970771    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:30.970787    8315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:21:30.970807    8315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-947553 NodeName:addons-947553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:21:30.970921    8315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-947553"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.80"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
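
This rendered config is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (the 2213-byte scp), and kubeadm consumes it by path via its --config flag, presumably along the lines of (the exact invocation appears later in the full log, not here):

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml ...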
	
	I1120 20:21:30.970978    8315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:21:30.984115    8315 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:21:30.984179    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:21:30.997000    8315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1120 20:21:31.019490    8315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:21:31.040334    8315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1120 20:21:31.062447    8315 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I1120 20:21:31.066873    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:31.082252    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:31.225462    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:31.260197    8315 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553 for IP: 192.168.39.80
	I1120 20:21:31.260217    8315 certs.go:195] generating shared ca certs ...
	I1120 20:21:31.260232    8315 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.260386    8315 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 20:21:31.565609    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt ...
	I1120 20:21:31.565637    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt: {Name:mkbaf0e14aa61a2ff1b23e3cacd2c256e32e6647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565863    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key ...
	I1120 20:21:31.565878    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key: {Name:mk6aeca1c4b3f3e4ff969d4a1bc1fecc4b0fa343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565998    8315 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 20:21:32.272316    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt ...
	I1120 20:21:32.272345    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt: {Name:mk6e855dc2ded0db05a3455c6e851abbeb92043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272564    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key ...
	I1120 20:21:32.272590    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key: {Name:mkc4fdf928a4209309cd887410d07a4fb9cad8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272702    8315 certs.go:257] generating profile certs ...
	I1120 20:21:32.272778    8315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key
	I1120 20:21:32.272805    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt with IP's: []
	I1120 20:21:32.531299    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt ...
	I1120 20:21:32.531330    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: {Name:mkacef1d43c5fe9ffb1d09b61b8a2a7db2cf094d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531547    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key ...
	I1120 20:21:32.531568    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key: {Name:mk2cb4e6b2267fb750aa726a4e65ebdfb9212cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531675    8315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2
	I1120 20:21:32.531704    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80]
	I1120 20:21:32.818886    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 ...
	I1120 20:21:32.818915    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2: {Name:mk790b39b3d9776066f9b6fb58232a0c1fea8994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819086    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 ...
	I1120 20:21:32.819099    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2: {Name:mk4563c621ceba8c563d34ed8d2ea6985bc21d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819174    8315 certs.go:382] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt
	I1120 20:21:32.819257    8315 certs.go:386] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key
	I1120 20:21:32.819305    8315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key
	I1120 20:21:32.819322    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt with IP's: []
	I1120 20:21:33.229266    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt ...
	I1120 20:21:33.229303    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt: {Name:mk842c9b1c7d59553f9e9969540d37e3f124f603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229499    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key ...
	I1120 20:21:33.229519    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key: {Name:mk774bcb76c9d8c8959c52bd40c6db81e671bce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229746    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 20:21:33.229789    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:21:33.229825    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:21:33.229867    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 20:21:33.230425    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:21:33.262117    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:21:33.298274    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:21:33.335705    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:21:33.369053    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 20:21:33.401973    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:21:33.434941    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:21:33.467052    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:21:33.499463    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:21:33.533326    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:21:33.557271    8315 ssh_runner.go:195] Run: openssl version
	I1120 20:21:33.565199    8315 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.579252    8315 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:21:33.592359    8315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598287    8315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598357    8315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.606765    8315 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:21:33.620434    8315 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 20:21:33.633673    8315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:21:33.639557    8315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:21:33.639640    8315 kubeadm.go:401] StartCluster: {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:33.639719    8315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:21:33.639785    8315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:21:33.678141    8315 cri.go:89] found id: ""
	I1120 20:21:33.678230    8315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:21:33.692525    8315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:21:33.705815    8315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:21:33.718541    8315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:21:33.718560    8315 kubeadm.go:158] found existing configuration files:
	
	I1120 20:21:33.718602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:21:33.730012    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:21:33.730084    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:21:33.742602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:21:33.754750    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:21:33.754833    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:21:33.773694    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.789522    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:21:33.789573    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.803646    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:21:33.817663    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:21:33.817714    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 20:21:33.830895    8315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1120 20:21:34.010421    8315 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:21:45.965962    8315 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:21:45.966043    8315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:21:45.966134    8315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:21:45.966274    8315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:21:45.966402    8315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:21:45.966485    8315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:21:45.968313    8315 out.go:252]   - Generating certificates and keys ...
	I1120 20:21:45.968415    8315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:21:45.968512    8315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:21:45.968625    8315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:21:45.968701    8315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:21:45.968754    8315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:21:45.968819    8315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:21:45.968913    8315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:21:45.969101    8315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969192    8315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:21:45.969314    8315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969371    8315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:21:45.969421    8315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:21:45.969458    8315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:21:45.969504    8315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:21:45.969545    8315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:21:45.969595    8315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:21:45.969637    8315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:21:45.969697    8315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:21:45.969754    8315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:21:45.969823    8315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:21:45.969888    8315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:21:45.971245    8315 out.go:252]   - Booting up control plane ...
	I1120 20:21:45.971330    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:21:45.971396    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:21:45.971453    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:21:45.971554    8315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:21:45.971660    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:21:45.971754    8315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:21:45.971826    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:21:45.971880    8315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:21:45.972014    8315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:21:45.972174    8315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:21:45.972260    8315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.915384ms
	I1120 20:21:45.972339    8315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:21:45.972417    8315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.80:8443/livez
	I1120 20:21:45.972499    8315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:21:45.972565    8315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:21:45.972626    8315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.009474334s
	I1120 20:21:45.972680    8315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.600510793s
	I1120 20:21:45.972745    8315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502310178s
	I1120 20:21:45.972837    8315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:21:45.972964    8315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:21:45.973026    8315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:21:45.973213    8315 kubeadm.go:319] [mark-control-plane] Marking the node addons-947553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:21:45.973262    8315 kubeadm.go:319] [bootstrap-token] Using token: 2xpoj0.3iafwcplk6gzssxo
	I1120 20:21:45.975478    8315 out.go:252]   - Configuring RBAC rules ...
	I1120 20:21:45.975637    8315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:21:45.975749    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:21:45.975873    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:21:45.975991    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:21:45.976087    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:21:45.976159    8315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:21:45.976260    8315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:21:45.976297    8315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:21:45.976339    8315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:21:45.976345    8315 kubeadm.go:319] 
	I1120 20:21:45.976416    8315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:21:45.976432    8315 kubeadm.go:319] 
	I1120 20:21:45.976492    8315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:21:45.976498    8315 kubeadm.go:319] 
	I1120 20:21:45.976524    8315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:21:45.976573    8315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:21:45.976612    8315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:21:45.976618    8315 kubeadm.go:319] 
	I1120 20:21:45.976662    8315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:21:45.976669    8315 kubeadm.go:319] 
	I1120 20:21:45.976708    8315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:21:45.976716    8315 kubeadm.go:319] 
	I1120 20:21:45.976761    8315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:21:45.976832    8315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:21:45.976903    8315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:21:45.976909    8315 kubeadm.go:319] 
	I1120 20:21:45.976975    8315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:21:45.977039    8315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:21:45.977046    8315 kubeadm.go:319] 
	I1120 20:21:45.977115    8315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977197    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 \
	I1120 20:21:45.977222    8315 kubeadm.go:319] 	--control-plane 
	I1120 20:21:45.977228    8315 kubeadm.go:319] 
	I1120 20:21:45.977318    8315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:21:45.977332    8315 kubeadm.go:319] 
	I1120 20:21:45.977426    8315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977559    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 
	I1120 20:21:45.977570    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:45.977577    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:45.978905    8315 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1120 20:21:45.980206    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1120 20:21:45.998278    8315 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1120 20:21:46.024557    8315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:21:46.024640    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.024705    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-947553 minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-947553 minikube.k8s.io/primary=true
	I1120 20:21:46.163608    8315 ops.go:34] apiserver oom_adj: -16
	I1120 20:21:46.163786    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.664084    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.164553    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.664473    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.164635    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.664221    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.163942    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.663901    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.164591    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.290234    8315 kubeadm.go:1114] duration metric: took 4.265649758s to wait for elevateKubeSystemPrivileges
	I1120 20:21:50.290282    8315 kubeadm.go:403] duration metric: took 16.650648707s to StartCluster
	I1120 20:21:50.290306    8315 settings.go:142] acquiring lock: {Name:mke92973c8f33ef32fe11f7b266adf74cd3ec47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.290453    8315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:50.290990    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.291268    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:21:50.291283    8315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:50.291344    8315 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 20:21:50.291469    8315 addons.go:70] Setting gcp-auth=true in profile "addons-947553"
	I1120 20:21:50.291484    8315 addons.go:70] Setting ingress=true in profile "addons-947553"
	I1120 20:21:50.291498    8315 mustload.go:66] Loading cluster: addons-947553
	I1120 20:21:50.291500    8315 addons.go:239] Setting addon ingress=true in "addons-947553"
	I1120 20:21:50.291494    8315 addons.go:70] Setting cloud-spanner=true in profile "addons-947553"
	I1120 20:21:50.291519    8315 addons.go:239] Setting addon cloud-spanner=true in "addons-947553"
	I1120 20:21:50.291525    8315 addons.go:70] Setting registry=true in profile "addons-947553"
	I1120 20:21:50.291542    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291555    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291554    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291565    8315 addons.go:239] Setting addon registry=true in "addons-947553"
	I1120 20:21:50.291594    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291595    8315 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.291607    8315 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-947553"
	I1120 20:21:50.291627    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291692    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291474    8315 addons.go:70] Setting yakd=true in profile "addons-947553"
	I1120 20:21:50.292160    8315 addons.go:239] Setting addon yakd=true in "addons-947553"
	I1120 20:21:50.292192    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292250    8315 addons.go:70] Setting inspektor-gadget=true in profile "addons-947553"
	I1120 20:21:50.292272    8315 addons.go:239] Setting addon inspektor-gadget=true in "addons-947553"
	I1120 20:21:50.292297    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292485    8315 addons.go:70] Setting ingress-dns=true in profile "addons-947553"
	I1120 20:21:50.292520    8315 addons.go:239] Setting addon ingress-dns=true in "addons-947553"
	I1120 20:21:50.292545    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292621    8315 addons.go:70] Setting registry-creds=true in profile "addons-947553"
	I1120 20:21:50.292644    8315 addons.go:239] Setting addon registry-creds=true in "addons-947553"
	I1120 20:21:50.292671    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292677    8315 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-947553"
	I1120 20:21:50.292719    8315 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:21:50.292755    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292807    8315 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-947553"
	I1120 20:21:50.292829    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-947553"
	I1120 20:21:50.292880    8315 addons.go:70] Setting metrics-server=true in profile "addons-947553"
	I1120 20:21:50.292897    8315 addons.go:239] Setting addon metrics-server=true in "addons-947553"
	I1120 20:21:50.292922    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293069    8315 out.go:179] * Verifying Kubernetes components...
	I1120 20:21:50.293281    8315 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.293300    8315 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-947553"
	I1120 20:21:50.293321    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293536    8315 addons.go:70] Setting default-storageclass=true in profile "addons-947553"
	I1120 20:21:50.293556    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-947553"
	I1120 20:21:50.293573    8315 addons.go:70] Setting storage-provisioner=true in profile "addons-947553"
	I1120 20:21:50.293591    8315 addons.go:239] Setting addon storage-provisioner=true in "addons-947553"
	I1120 20:21:50.293613    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293979    8315 addons.go:70] Setting volcano=true in profile "addons-947553"
	I1120 20:21:50.294002    8315 addons.go:239] Setting addon volcano=true in "addons-947553"
	I1120 20:21:50.294026    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294103    8315 addons.go:70] Setting volumesnapshots=true in profile "addons-947553"
	I1120 20:21:50.294122    8315 addons.go:239] Setting addon volumesnapshots=true in "addons-947553"
	I1120 20:21:50.294146    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294465    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:50.297973    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.299952    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 20:21:50.299964    8315 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 20:21:50.300060    8315 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 20:21:50.300093    8315 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 20:21:50.299977    8315 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 20:21:50.301985    8315 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-947553"
	I1120 20:21:50.302030    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.302603    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 20:21:50.303185    8315 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 20:21:50.302631    8315 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:50.303261    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	W1120 20:21:50.302916    8315 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 20:21:50.303040    8315 addons.go:239] Setting addon default-storageclass=true in "addons-947553"
	I1120 20:21:50.303355    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.303953    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 20:21:50.303969    8315 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 20:21:50.303973    8315 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 20:21:50.303953    8315 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 20:21:50.304024    8315 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:50.305543    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 20:21:50.304040    8315 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 20:21:50.304099    8315 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 20:21:50.305800    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 20:21:50.304918    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.304913    8315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:21:50.305899    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:50.307319    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 20:21:50.306014    8315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:50.307351    8315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:21:50.307429    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.307470    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 20:21:50.307480    8315 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1120 20:21:50.306784    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:21:50.307511    8315 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:21:50.306817    8315 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 20:21:50.307620    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.306822    8315 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:50.307695    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 20:21:50.307706    8315 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:50.307716    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 20:21:50.306909    8315 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:50.308092    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 20:21:50.308474    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 20:21:50.308512    8315 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 20:21:50.308524    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 20:21:50.308827    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.308882    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309172    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.309208    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309325    8315 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 20:21:50.309319    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.309343    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:50.309353    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:21:50.309929    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.310172    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.311742    8315 out.go:179]   - Using image docker.io/busybox:stable
	I1120 20:21:50.311746    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 20:21:50.311894    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:50.311914    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 20:21:50.313106    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:50.313128    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 20:21:50.314097    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.314587    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 20:21:50.315478    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.315516    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.316257    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.316610    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 20:21:50.317131    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.317791    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318124    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318489    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.318521    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318877    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.319057    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319200    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319245    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 20:21:50.319767    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319780    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319803    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319808    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320039    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320130    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320260    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320721    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.320726    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321176    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321210    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321308    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321337    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321371    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321267    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321416    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321437    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321401    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321692    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321834    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 20:21:50.321903    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321928    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321951    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322097    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322416    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322441    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322690    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322712    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.322755    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323004    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323171    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.323197    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323359    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324196    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.324226    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324375    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.324536    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 20:21:50.325593    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 20:21:50.325607    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 20:21:50.328078    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328534    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.328557    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328735    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	W1120 20:21:50.476524    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.476558    8315 retry.go:31] will retry after 236.913044ms: ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513415    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513438    8315 retry.go:31] will retry after 367.013463ms: ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513646    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513672    8315 retry.go:31] will retry after 332.960576ms: ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.932554    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:50.932720    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:21:51.133049    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:51.144339    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:51.194458    8315 node_ready.go:35] waiting up to 6m0s for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206010    8315 node_ready.go:49] node "addons-947553" is "Ready"
	I1120 20:21:51.206043    8315 node_ready.go:38] duration metric: took 11.547378ms for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206057    8315 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:21:51.206112    8315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:21:51.317342    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:51.364561    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:51.396520    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 20:21:51.396550    8315 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 20:21:51.401286    8315 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 20:21:51.401312    8315 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 20:21:51.407832    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:51.408939    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:51.438765    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:51.452371    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:51.487541    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:21:51.487567    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 20:21:51.667634    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:51.705278    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 20:21:51.705307    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 20:21:52.073299    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 20:21:52.073332    8315 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 20:21:52.156840    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 20:21:52.156890    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 20:21:52.182216    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 20:21:52.182260    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 20:21:52.289345    8315 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.289373    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 20:21:52.358156    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:21:52.358186    8315 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:21:52.524224    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:52.790466    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 20:21:52.790495    8315 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 20:21:52.867899    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 20:21:52.867926    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 20:21:52.911549    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.970452    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 20:21:52.970488    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 20:21:53.004660    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.004687    8315 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:21:53.165475    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.165505    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 20:21:53.292981    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 20:21:53.293014    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 20:21:53.388236    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 20:21:53.388266    8315 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 20:21:53.476188    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.678912    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.790164    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 20:21:53.790192    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 20:21:53.898000    8315 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:53.898021    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 20:21:54.089534    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 20:21:54.089570    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 20:21:54.326111    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:54.418621    8315 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.485861131s)
	I1120 20:21:54.418657    8315 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
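
The ssh_runner.go:235 entry above spells out how the host record lands in CoreDNS: the pipeline dumps the coredns ConfigMap as YAML, uses sed to splice a hosts plugin block in front of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", then pushes the result back with "kubectl replace -f -". Reconstructed from that sed expression (not captured from the cluster), the patched Corefile section would read roughly:

	.:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	}
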
	I1120 20:21:54.662053    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 20:21:54.662081    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 20:21:54.924608    8315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-947553" context rescaled to 1 replicas
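
kapi.go:214 records that the coredns deployment was scaled down from the two replicas visible in the pod list below to a single replica, as minikube does on single-node clusters. The equivalent standalone command would be:

	kubectl --context addons-947553 -n kube-system scale deployment coredns --replicas=1
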
	I1120 20:21:55.256603    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 20:21:55.256640    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 20:21:55.513213    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.380124251s)
	I1120 20:21:55.513226    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.368859446s)
	I1120 20:21:55.513320    8315 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.307185785s)
	I1120 20:21:55.513363    8315 api_server.go:72] duration metric: took 5.222046626s to wait for apiserver process to appear ...
	I1120 20:21:55.513378    8315 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:21:55.513400    8315 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I1120 20:21:55.523525    8315 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I1120 20:21:55.528356    8315 api_server.go:141] control plane version: v1.34.1
	I1120 20:21:55.528379    8315 api_server.go:131] duration metric: took 14.994765ms to wait for apiserver health ...
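
The api_server.go lines above are an in-process readiness poll: minikube repeatedly issues GET https://192.168.39.80:8443/healthz and treats an HTTP 200 with the literal body "ok" as healthy. A minimal shell sketch of the same wait, assuming curl is available on the host (-k because the apiserver presents a cluster-local certificate):

	until curl -ksf https://192.168.39.80:8443/healthz >/dev/null; do
	  sleep 1
	done
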
	I1120 20:21:55.528386    8315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:21:55.548383    8315 system_pods.go:59] 10 kube-system pods found
	I1120 20:21:55.548433    8315 system_pods.go:61] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.548445    8315 system_pods.go:61] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548456    8315 system_pods.go:61] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548466    8315 system_pods.go:61] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.548475    8315 system_pods.go:61] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.548481    8315 system_pods.go:61] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.548491    8315 system_pods.go:61] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.548496    8315 system_pods.go:61] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.548506    8315 system_pods.go:61] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.548517    8315 system_pods.go:61] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.548528    8315 system_pods.go:74] duration metric: took 20.135717ms to wait for pod list to return data ...
	I1120 20:21:55.548544    8315 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:21:55.562077    8315 default_sa.go:45] found service account: "default"
	I1120 20:21:55.562106    8315 default_sa.go:55] duration metric: took 13.552829ms for default service account to be created ...
	I1120 20:21:55.562116    8315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:21:55.573516    8315 system_pods.go:86] 10 kube-system pods found
	I1120 20:21:55.573548    8315 system_pods.go:89] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.573556    8315 system_pods.go:89] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573563    8315 system_pods.go:89] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573568    8315 system_pods.go:89] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.573572    8315 system_pods.go:89] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.573584    8315 system_pods.go:89] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.573588    8315 system_pods.go:89] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.573591    8315 system_pods.go:89] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.573595    8315 system_pods.go:89] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.573610    8315 system_pods.go:89] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.573619    8315 system_pods.go:126] duration metric: took 11.497162ms to wait for k8s-apps to be running ...
	I1120 20:21:55.573629    8315 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:21:55.573680    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:21:55.821435    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 20:21:55.821456    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 20:21:56.372153    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 20:21:56.372176    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 20:21:57.167628    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.167657    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 20:21:57.654485    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.724650    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 20:21:57.727763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728228    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:57.728257    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728455    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
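
The sshutil.go:53 line gives the exact connection parameters minikube uses to reach the guest over the libvirt network: key-based SSH as the docker user to the DHCP lease shown above. For a manual session against the same node, the equivalent invocation is:

	ssh -i /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa \
	  docker@192.168.39.80
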
	I1120 20:21:57.738040    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.420656069s)
	I1120 20:21:57.738102    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.373508925s)
	I1120 20:21:58.308598    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 20:21:58.564754    8315 addons.go:239] Setting addon gcp-auth=true in "addons-947553"
	I1120 20:21:58.564806    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:58.566499    8315 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 20:21:58.568681    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569089    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:58.569115    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569249    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:58.833314    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.424339116s)
	I1120 20:21:58.833336    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.425455784s)
	I1120 20:21:58.833402    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.394606542s)
	I1120 20:22:00.317183    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.864775691s)
	I1120 20:22:00.317236    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.649563834s)
	I1120 20:22:00.317246    8315 addons.go:480] Verifying addon ingress=true in "addons-947553"
	I1120 20:22:00.317313    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.793066584s)
	I1120 20:22:00.317374    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.405778801s)
	I1120 20:22:00.317401    8315 addons.go:480] Verifying addon registry=true in "addons-947553"
	I1120 20:22:00.317473    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.841250467s)
	I1120 20:22:00.317500    8315 addons.go:480] Verifying addon metrics-server=true in "addons-947553"
	I1120 20:22:00.317549    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.638598976s)
	I1120 20:22:00.318753    8315 out.go:179] * Verifying ingress addon...
	I1120 20:22:00.319477    8315 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-947553 service yakd-dashboard -n yakd-dashboard
	
	I1120 20:22:00.319499    8315 out.go:179] * Verifying registry addon...
	I1120 20:22:00.321062    8315 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 20:22:00.321882    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 20:22:00.330255    8315 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:22:00.330274    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:00.330580    8315 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:22:00.330602    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
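
The kapi.go:96 entries that fill the rest of this log are a fixed-interval poll: each verifier lists pods by its label selector and repeats until every match leaves Pending and reports Ready. Outside the harness, the same wait can be expressed directly with kubectl; a sketch using the selectors from the log:

	kubectl --context addons-947553 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=5m
	kubectl --context addons-947553 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=5m
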
	I1120 20:22:00.843037    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:00.862027    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.136755    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.810594192s)
	I1120 20:22:01.136799    8315 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.563097568s)
	W1120 20:22:01.136810    8315 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 20:22:01.136824    8315 system_svc.go:56] duration metric: took 5.563190734s WaitForService to wait for kubelet
	I1120 20:22:01.136838    8315 retry.go:31] will retry after 297.745206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
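
The failure and retry above are a CRD establishment race, not a broken manifest: the first apply submits the csi-hostpath-snapclass VolumeSnapshotClass in the same batch as the CRDs that define its kind, and the apiserver has no REST mapping for snapshot.storage.k8s.io/v1 yet, so kubectl exits 1 even though every other object in the batch was created. minikube's answer is retry.go:31's short backoff (297.745206ms here), after which the re-apply at 20:22:01.434928 succeeds with --force. When sequencing this by hand, the race can be avoided by waiting for the CRD to become Established before creating instances of it; a sketch:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml
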
	I1120 20:22:01.136835    8315 kubeadm.go:587] duration metric: took 10.845518493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:22:01.136866    8315 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:22:01.169336    8315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1120 20:22:01.169377    8315 node_conditions.go:123] node cpu capacity is 2
	I1120 20:22:01.169391    8315 node_conditions.go:105] duration metric: took 32.519256ms to run NodePressure ...
	I1120 20:22:01.169403    8315 start.go:242] waiting for startup goroutines ...
	I1120 20:22:01.357701    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:01.358795    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.434928    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:22:01.868679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.868782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.346294    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.352833    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.862753    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.890512    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.996195    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.34165692s)
	I1120 20:22:02.996225    8315 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.429699726s)
	I1120 20:22:02.996254    8315 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:22:02.997930    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:22:02.997950    8315 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 20:22:02.999363    8315 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 20:22:02.999980    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 20:22:03.000816    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 20:22:03.000833    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 20:22:03.047631    8315 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:22:03.047661    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.095774    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 20:22:03.095800    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 20:22:03.172675    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.172696    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 20:22:03.258447    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.328725    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.328999    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:03.506980    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.835051    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.838342    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.009598    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.059484    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.624514335s)
	I1120 20:22:04.342509    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.346146    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:04.552392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.655990    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.397510493s)
	I1120 20:22:04.657251    8315 addons.go:480] Verifying addon gcp-auth=true in "addons-947553"
	I1120 20:22:04.658765    8315 out.go:179] * Verifying gcp-auth addon...
	I1120 20:22:04.660962    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 20:22:04.689345    8315 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 20:22:04.689379    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:04.830184    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.831805    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.008119    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.171353    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:05.336728    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:05.336869    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.517754    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.671439    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:05.828977    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:05.832656    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.008324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:06.167007    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:06.324520    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:06.327339    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.505702    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:06.665077    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:06.831323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.832004    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.005311    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:07.170575    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:07.326420    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.330401    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:07.504324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:07.665313    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:07.827482    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.830140    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.005717    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:08.168657    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:08.325483    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.326808    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:08.508047    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:08.664546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:08.828313    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.829419    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:09.004761    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:09.165417    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:09.325923    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:09.327133    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:09.503806    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:09.665158    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:09.827304    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:09.828458    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:10.005165    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:10.164419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:10.328020    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:10.328899    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:10.503540    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:10.665211    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:10.827565    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:10.828293    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.007088    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:11.172637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:11.329792    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.330515    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:11.506127    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:11.666152    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:11.832352    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.832833    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.009397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:12.164503    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:12.324601    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:12.330001    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.557333    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:12.690799    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:12.826246    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.827168    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:13.004570    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:13.166124    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:13.330939    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:13.334724    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:13.505747    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:13.664947    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:13.826640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:13.827501    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.005488    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:14.172285    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:14.325676    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.327874    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:14.505478    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:14.665377    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:14.828164    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.828324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:15.004108    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:15.165356    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:15.332218    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:15.345244    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:15.505401    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:15.665824    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:15.827117    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:15.827311    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.006364    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:16.177517    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:16.340592    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.341189    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:16.504797    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:16.664830    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:16.830245    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.830443    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.005532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:17.167264    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:17.330014    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.331394    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:17.559675    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:17.678477    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:17.826495    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.832794    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.005502    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:18.166351    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:18.327573    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.327734    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:18.503894    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:18.666269    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:18.830279    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.832316    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:19.005728    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:19.166452    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:19.327371    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:19.329317    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:19.506362    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:19.670606    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:19.831060    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:19.832764    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:20.004618    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:20.166635    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:20.327601    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:20.327638    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:20.504392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:20.665742    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:20.827471    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:20.829616    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:21.004605    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:21.169921    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:21.333272    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:21.336011    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:21.504542    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:21.665682    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:21.825419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:21.828055    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.004227    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:22.164229    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:22.326927    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.332370    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:22.505033    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:22.666978    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:22.834204    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.836963    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:23.003930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:23.168623    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:23.430297    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:23.433691    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:23.508735    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:23.667674    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:23.836886    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:23.837245    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:24.005900    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:24.169110    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:24.326634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:24.327904    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:24.673297    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:24.673506    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:24.830570    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:24.831631    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:25.009064    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:25.164922    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:25.325762    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:25.327935    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:25.504532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:25.667618    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:25.827414    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:25.828623    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:26.005073    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:26.167711    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:26.326679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:26.327247    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:26.505503    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:26.665655    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:26.825436    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:26.828500    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:27.005840    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:27.167830    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:27.328527    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:27.328746    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:27.506666    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:27.666716    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:27.832531    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:27.833632    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:28.006766    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:28.165323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:28.327708    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:28.328341    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:28.506036    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:28.666241    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:28.944433    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:28.944810    8315 kapi.go:107] duration metric: took 28.622926025s to wait for kubernetes.io/minikube-addons=registry ...
	[446 near-identical kapi.go:96 poll lines condensed: "waiting for pod" for gcp-auth, ingress-nginx, and csi-hostpath-driver, all Pending: [<nil>], one line every ~160ms (each label polled every ~500ms) from 20:22:29.006863 through 20:23:43.165530]
	I1120 20:23:43.336903    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:43.508108    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:43.665050    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:43.828179    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:44.004826    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:44.168465    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:44.327802    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:44.588926    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:44.686035    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:44.836096    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:45.013912    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:45.170060    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:45.330109    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:45.506461    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:45.666266    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:45.833355    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:46.012759    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:46.165788    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:46.331536    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:46.544743    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:46.668681    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:46.826281    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:47.004579    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:47.164501    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:47.325301    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:47.510314    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:47.664541    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:47.825733    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:48.005390    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:48.164631    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:48.325040    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:48.503952    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:48.666328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:48.824449    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:49.004387    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:49.165135    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:49.324520    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:49.504929    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:49.665257    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:49.825179    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:50.004248    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:50.164504    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:50.326488    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:50.504139    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:50.665131    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:50.825464    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:51.004233    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:51.165223    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:51.324723    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:51.505340    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:51.665910    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:51.824647    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:52.004550    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:52.165026    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:52.324772    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:52.504303    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:52.667291    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:52.825223    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:53.004148    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:53.164388    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:53.325070    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:53.503625    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:53.665901    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:53.826412    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:54.003441    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:54.164614    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:54.325319    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:54.505054    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:54.665324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:54.825610    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:55.004621    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:55.165405    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:55.326233    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:55.503470    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:55.665016    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:55.825575    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:56.004511    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:56.165472    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:56.325694    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:56.504017    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:56.663700    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:56.825810    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:57.004323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:57.165204    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:57.324888    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:57.504535    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:57.664639    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:57.825026    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:58.003739    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:58.165764    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:58.325045    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:58.503360    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:58.664840    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:58.826605    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:59.003999    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:59.165275    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:59.325421    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:59.504637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:59.665014    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:59.824766    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:00.005128    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:00.164263    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:00.325333    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:00.504062    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:00.664931    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:00.826290    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:01.004640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:01.164832    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:01.325901    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:01.505129    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:01.664227    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:01.824719    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:02.004950    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:02.165053    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:02.325360    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:02.505959    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:02.664868    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:02.826277    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:03.004096    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:03.164445    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:03.324757    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:03.505252    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:03.665119    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:03.824454    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:04.004909    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:04.165591    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:04.325118    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:04.507564    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:04.664700    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:04.826799    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:05.005349    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:05.165155    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:05.324582    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:05.504443    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:05.665778    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:05.825741    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:06.004414    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:06.164474    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:06.326066    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:06.503776    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:06.664979    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:06.826056    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:07.003318    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:07.164124    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:07.324310    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:07.503413    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:07.664606    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:07.824831    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:08.004542    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:08.165571    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:08.325290    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:08.503944    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:08.666366    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:08.825256    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:09.003826    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:09.165200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:09.324763    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:09.505835    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:09.665113    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:09.824632    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:10.004172    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:10.164462    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:10.324992    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:10.503686    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:10.664930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:10.825754    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:11.004000    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:11.163782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:11.325549    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:11.504780    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:11.665314    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:11.825684    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:12.004180    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:12.164082    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:12.324141    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:12.504612    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:12.664748    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:12.825910    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:13.004630    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:13.165026    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:13.325684    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:13.504463    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:13.664189    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:13.824224    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:14.004212    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:14.165015    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:14.324331    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:14.507504    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:14.664678    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:14.826028    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:15.004824    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:15.165312    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:15.325310    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:15.503525    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:15.664637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:15.825538    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:16.005397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:16.165397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:16.324350    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:16.504613    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:16.665640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:16.825950    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:17.004189    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:17.167663    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:17.326720    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:17.508041    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:17.665546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:17.828365    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:18.004058    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:18.165184    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:18.325634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:18.504817    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:18.668489    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:18.828972    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:19.005704    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:19.167268    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:19.334698    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:19.507751    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:19.667328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:19.831249    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:20.005669    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:20.167145    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:20.328610    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:20.504643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:20.666213    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:20.830891    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:21.006991    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:21.167023    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:21.326125    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:21.512788    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:21.665384    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:21.829776    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:22.003972    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:22.170397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:22.324898    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:22.505825    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:22.665603    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:22.827634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:23.007579    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:23.168453    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:23.327180    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:23.503837    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:23.665184    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:23.824592    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:24.005482    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:24.164766    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:24.330141    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:24.504539    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:24.667427    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:24.835328    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:25.139729    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:25.240898    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:25.326048    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:25.505595    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:25.670610    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:25.827986    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:26.007659    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:26.164981    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:26.331893    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:26.505078    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:26.665057    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:26.824303    8315 kapi.go:107] duration metric: took 2m26.503242857s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 20:24:27.004029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.164962    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:27.504834    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.668267    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.007248    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.166983    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.507055    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.666163    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.005997    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.328979    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.505976    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.669956    8315 kapi.go:107] duration metric: took 2m25.008991629s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 20:24:29.672108    8315 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-947553 cluster.
	I1120 20:24:29.673437    8315 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 20:24:29.674752    8315 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
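
The `gcp-auth-skip-secret` opt-out mentioned in the message above is just a pod label. Below is a minimal client-go sketch of creating a pod that the gcp-auth webhook should leave alone; the pod name, namespace, image, and the label value "true" are illustrative assumptions, since the log only confirms the label key.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig that minikube configured (assumed default path).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			// The gcp-auth webhook skips credential injection for pods
			// carrying this label key (the value "true" is an assumption).
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "gcr.io/k8s-minikube/busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created without mounted GCP credentials:", created.Name)
}
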
	I1120 20:24:30.011875    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:30.506718    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.005946    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.508062    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.004768    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.513385    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.006643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.504200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:34.004984    8315 kapi.go:107] duration metric: took 2m31.004999967s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
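
Each "duration metric: took ..." line above closes a kapi.go polling loop that repeatedly lists pods by label selector and logs the current phase until the pods leave Pending. Below is a minimal sketch of such a loop with client-go; the 500ms interval, 6-minute timeout, and the helper name waitForSelector are assumptions, not minikube's actual kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector polls until every pod matching the selector is Running.
func waitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// No pods yet (or a transient list error): keep polling.
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	if err := waitForSelector(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s\n", time.Since(start))
}

Returning (false, nil) on list errors keeps the loop alive through transient apiserver hiccups, which matches how the log above simply keeps printing Pending until the condition is met.
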
	I1120 20:24:34.006745    8315 out.go:179] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1120 20:24:34.007905    8315 addons.go:515] duration metric: took 2m43.716565511s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1120 20:24:34.007942    8315 start.go:247] waiting for cluster config update ...
	I1120 20:24:34.007968    8315 start.go:256] writing updated cluster config ...
	I1120 20:24:34.008267    8315 ssh_runner.go:195] Run: rm -f paused
	I1120 20:24:34.016789    8315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:24:34.020696    8315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.026522    8315 pod_ready.go:94] pod "coredns-66bc5c9577-tpfkd" is "Ready"
	I1120 20:24:34.026545    8315 pod_ready.go:86] duration metric: took 5.821939ms for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.029616    8315 pod_ready.go:83] waiting for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.035420    8315 pod_ready.go:94] pod "etcd-addons-947553" is "Ready"
	I1120 20:24:34.035447    8315 pod_ready.go:86] duration metric: took 5.807107ms for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.038012    8315 pod_ready.go:83] waiting for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.042359    8315 pod_ready.go:94] pod "kube-apiserver-addons-947553" is "Ready"
	I1120 20:24:34.042389    8315 pod_ready.go:86] duration metric: took 4.353428ms for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.045156    8315 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.421067    8315 pod_ready.go:94] pod "kube-controller-manager-addons-947553" is "Ready"
	I1120 20:24:34.421095    8315 pod_ready.go:86] duration metric: took 375.9154ms for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.622667    8315 pod_ready.go:83] waiting for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.021658    8315 pod_ready.go:94] pod "kube-proxy-92nmr" is "Ready"
	I1120 20:24:35.021685    8315 pod_ready.go:86] duration metric: took 398.990446ms for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.222270    8315 pod_ready.go:83] waiting for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621176    8315 pod_ready.go:94] pod "kube-scheduler-addons-947553" is "Ready"
	I1120 20:24:35.621208    8315 pod_ready.go:86] duration metric: took 398.900241ms for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621225    8315 pod_ready.go:40] duration metric: took 1.604402122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
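
The pod_ready.go lines above wait for each pod to be "Ready" or be gone. In Kubernetes terms that means the pod either carries the PodReady condition with status True, or no longer exists. Below is a minimal hedged sketch of that check; the helper name readyOrGone is hypothetical, while the pod and namespace come from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// readyOrGone reports whether the named pod is Ready, treating a
// deleted pod ("be gone") as also satisfying the wait.
func readyOrGone(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := readyOrGone(cs, "kube-system", "etcd-addons-947553")
	fmt.Println(ok, err)
}
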
	I1120 20:24:35.668514    8315 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:24:35.670410    8315 out.go:179] * Done! kubectl is now configured to use "addons-947553" cluster and "default" namespace by default
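
The "minor skew: 0" note above compares the minor version components of kubectl (1.34.2) and the cluster (1.34.1); kubectl is supported within one minor version of the API server. A trivial sketch of that arithmetic (error handling elided for brevity):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.34.2", "1.34.1"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}
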
	
	
	==> CRI-O <==
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.375172615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670690375146221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34a4f926-ac07-4b00-aa9c-df4ec780f65d name=/runtime.v1.ImageService/ImageFsInfo
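
The CRI-O entries in this section are gRPC round-trips over the Container Runtime Interface; the ListContainers request/response below is the same RPC the kubelet issues. Below is a minimal hedged client sketch against the cri-api bindings; the socket path /var/run/crio/crio.sock and the 5-second timeout are assumptions for a CRI-O host.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI runtimes listen on a local unix socket; path assumed for CRI-O.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter reproduces the "No filters were applied, returning
	// full container list" path seen in the debug log below.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id, c.State, c.Metadata.GetName())
	}
}
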
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.376241896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ab7b167-970f-4728-ad8a-544f3d95b7ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.376298982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ab7b167-970f-4728-ad8a-544f3d95b7ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.376849739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c9950d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandboxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kuber
netes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5d
e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-19
3f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6
aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d
87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e7
9661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ab7b167-970f-4728-ad8a-544f3d95b7ac name=/runtime.v1.RuntimeService/ListContainers
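
The giant Response line ending above is a single reply to /runtime.v1.RuntimeService/ListContainers, one of the CRI gRPC calls that CRI-O's otel-collector interceptor traces at debug level (alongside the Version and ImageFsInfo calls below). For reference, a minimal Go sketch of the same query against the CRI-O socket; the socket path /var/run/crio/crio.sock and the k8s.io/cri-api client bindings are assumptions for illustration, not taken from this log:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed socket path: the CRI-O default, which minikube's crio runtime also uses.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same call as the ListContainers lines in this log: an empty filter
		// makes the server return everything, which is what the server-side
		// "No filters were applied, returning full container list" line reports.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// State prints as CONTAINER_RUNNING / CONTAINER_EXITED etc.,
			// matching the State fields in the response above.
			fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

From the command line, crictl ps -a pointed at the same socket (--runtime-endpoint unix:///var/run/crio/crio.sock) reaches the same endpoint and renders the equivalent list as a table.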
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.412720516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a08726c-f2cd-410d-8e14-1f1c37c0b899 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.412951567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a08726c-f2cd-410d-8e14-1f1c37c0b899 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.414384546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=490b9cdb-30be-4132-98dc-b1f8c8984c96 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.415893169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670690415868054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=490b9cdb-30be-4132-98dc-b1f8c8984c96 name=/runtime.v1.ImageService/ImageFsInfo
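
The ImageFsInfo response directly above reports usage for CRI-O's image store at /var/lib/containers/storage/overlay-images: UsedBytes 484697 and InodesUsed 176. The same figures come from the CRI ImageService; a short sketch reusing the conn and ctx from the previous example (same socket and library assumptions):

	// The ImageService client handles image-side RPCs such as ImageFsInfo.
	img := runtimeapi.NewImageServiceClient(conn)
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range fs.ImageFilesystems {
		// Mirrors the logged fields: FsId.Mountpoint, UsedBytes, InodesUsed.
		fmt.Printf("%s: %d bytes, %d inodes\n",
			f.FsId.Mountpoint, f.UsedBytes.Value, f.InodesUsed.Value)
	}

crictl imagefsinfo prints the same structure from the command line.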
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.417359277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=208734ab-c103-4d37-992a-9aacee284ba8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.417657626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=208734ab-c103-4d37-992a-9aacee284ba8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.418164806Z" level=debug msg="Response: &ListContainersResponse{Containers:[...same container list as the ListContainers response above...],}" file="otel-collector/interceptors.go:74" id=208734ab-c103-4d37-992a-9aacee284ba8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.448786136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6496446-ac85-4af8-8266-96f19f2236c2 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.449107883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6496446-ac85-4af8-8266-96f19f2236c2 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.454927646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5e7242c-7958-4e33-a9c9-d6ad58206439 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.458274674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670690458184360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5e7242c-7958-4e33-a9c9-d6ad58206439 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.459614125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6eb8e82-f529-4c1b-a6d3-a1c6af9575d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.459814994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6eb8e82-f529-4c1b-a6d3-a1c6af9575d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.460286334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c99
50d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandboxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandbox
Id:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kuber
netes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5d
e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-19
3f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6
aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d
87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e7
9661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6eb8e82-f529-4c1b-a6d3-a1c6af9575d6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.492584583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=389e628f-f3ce-4c5f-81be-fbe679edd46d name=/runtime.v1.RuntimeService/Version
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.492772434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=389e628f-f3ce-4c5f-81be-fbe679edd46d name=/runtime.v1.RuntimeService/Version
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.494076422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51c35182-e240-41cc-b7cc-6572d30bb5e2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.495199565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670690495170916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51c35182-e240-41cc-b7cc-6572d30bb5e2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.495990180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6efd7abf-7d52-42c8-93d9-11c36dda4595 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.496070553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6efd7abf-7d52-42c8-93d9-11c36dda4595 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:31:30 addons-947553 crio[815]: time="2025-11-20 20:31:30.496634043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c99
50d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandboxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandbox
Id:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kuber
netes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5d
e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-19
3f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6
aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d
87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e7
9661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6efd7abf-7d52-42c8-93d9-11c36dda4595 name=/runtime.v1.RuntimeService/ListContainers
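	
	The Version, ImageFsInfo, and ListContainers request/response pairs above are the kubelet's routine CRI polling of CRI-O; the container list they return is rendered as a readable table in the next section. As a rough way to issue the same three RPCs by hand, one could run crictl inside the minikube VM. This is a sketch only; it assumes crictl is present on the node (it normally is in minikube guests) and that the profile is still running:

	    # Hypothetical manual check mirroring the RPCs traced above
	    minikube ssh -p addons-947553 -- sudo crictl version      # RuntimeService/Version
	    minikube ssh -p addons-947553 -- sudo crictl imagefsinfo  # ImageService/ImageFsInfo
	    minikube ssh -p addons-947553 -- sudo crictl ps -a        # RuntimeService/ListContainers, no filter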
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	83c7cffc192d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   30b4f748049f4       busybox                                    default
	1182df9d08d19       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	3c592e1a3ecfd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	a26090ac24452       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	d3d8b65697554       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             7 minutes ago       Running             controller                               0                   0a1212c05ea88       ingress-nginx-controller-6c8bf45fb-6hpj8   ingress-nginx
	a781be0336bcb       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	c7f17ef5a5382       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	fb8563d67522d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   367d0442cb7aa       csi-hostpath-resizer-0                     kube-system
	68eba1ff29e5c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   77498a7d4320e       csi-hostpath-attacher-0                    kube-system
	4189eecca6982       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   64e4a94a11b34       snapshot-controller-7d9fbc56b8-7n9bg       kube-system
	b13c5a7e788c0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	ebdc020b24013       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   7 minutes ago       Exited              patch                                    0                   aab95fc7e29c5       ingress-nginx-admission-patch-xqmtg        ingress-nginx
	30d944607d06d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   f811a556e9729       snapshot-controller-7d9fbc56b8-944pl       kube-system
	cf24d40d09d97       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   7 minutes ago       Exited              create                                   0                   b81a00087e290       ingress-nginx-admission-create-whk72       ingress-nginx
	3ed48acc4e6b6       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               9 minutes ago       Running             minikube-ingress-dns                     0                   e08ae02d97821       kube-ingress-dns-minikube                  kube-system
	1f0a03ae88dd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             9 minutes ago       Running             storage-provisioner                      0                   7a8aea6b56873       storage-provisioner                        kube-system
	dc04223232fbc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     9 minutes ago       Running             amd-gpu-device-plugin                    0                   1c75fb61317d9       amd-gpu-device-plugin-sl95v                kube-system
	44ea167ad7358       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             9 minutes ago       Running             coredns                                  0                   1b8aec92deac0       coredns-66bc5c9577-tpfkd                   kube-system
	107772b7cd302       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             9 minutes ago       Running             kube-proxy                               0                   44459bb4c1592       kube-proxy-92nmr                           kube-system
	1d2feff972c82       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             9 minutes ago       Running             kube-scheduler                           0                   7854300bd65f2       kube-scheduler-addons-947553               kube-system
	3ce144c0d06ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             9 minutes ago       Running             kube-apiserver                           0                   c0df804390cc3       kube-apiserver-addons-947553               kube-system
	3f04fbc5a9a9d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             9 minutes ago       Running             kube-controller-manager                  0                   c73098b299e79       kube-controller-manager-addons-947553      kube-system
	1b4f51aca4917       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             9 minutes ago       Running             etcd                                     0                   959ac70855500       etcd-addons-947553                         kube-system
	
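	The only Exited entries in the table above are the ingress-nginx admission create and patch containers, which are one-shot Job containers expected to exit after running; every other container is Running. A hypothetical spot-check that those exits are completions rather than crashes, assuming the cluster from this run is still reachable:

	    # Completed one-shot Jobs should show status Completed, the controller Running
	    kubectl --context addons-947553 -n ingress-nginx get pods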
	
	==> coredns [44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86] <==
	[INFO] 10.244.0.8:38281 - 13381 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000419309s
	[INFO] 10.244.0.8:38281 - 4239 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000335145s
	[INFO] 10.244.0.8:38281 - 63093 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000099875s
	[INFO] 10.244.0.8:38281 - 4801 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008321s
	[INFO] 10.244.0.8:38281 - 39674 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000264028s
	[INFO] 10.244.0.8:38281 - 62546 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124048s
	[INFO] 10.244.0.8:38281 - 16805 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000647057s
	[INFO] 10.244.0.8:51997 - 13985 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160466s
	[INFO] 10.244.0.8:51997 - 14298 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000220652s
	[INFO] 10.244.0.8:45076 - 61133 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125223s
	[INFO] 10.244.0.8:45076 - 60865 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000152664s
	[INFO] 10.244.0.8:36522 - 44178 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060404s
	[INFO] 10.244.0.8:36522 - 43995 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078705s
	[INFO] 10.244.0.8:59475 - 4219 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116054s
	[INFO] 10.244.0.8:59475 - 4422 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010261s
	[INFO] 10.244.0.23:44890 - 42394 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000390546s
	[INFO] 10.244.0.23:40413 - 38581 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001287022s
	[INFO] 10.244.0.23:48952 - 288 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001963576s
	[INFO] 10.244.0.23:45971 - 54062 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002169261s
	[INFO] 10.244.0.23:46787 - 19498 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139649s
	[INFO] 10.244.0.23:50609 - 21977 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067547s
	[INFO] 10.244.0.23:44756 - 29378 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.005330443s
	[INFO] 10.244.0.23:59657 - 39385 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005346106s
	[INFO] 10.244.0.27:42107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000463345s
	[INFO] 10.244.0.27:53096 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000254044s
	
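	The queries above show registry.kube-system.svc.cluster.local resolving with NOERROR (both A and AAAA) after the usual search-path NXDOMAIN misses, so in-cluster name resolution for the registry service looks healthy. A minimal sketch of reproducing the lookup from a throwaway pod; the pod name is hypothetical, and the image is the busybox image already used elsewhere in this report:

	    # dns-check is a disposable pod name chosen for illustration
	    kubectl --context addons-947553 run dns-check --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local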
	
	==> describe nodes <==
	Name:               addons-947553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-947553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-947553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-947553
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-947553"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:21:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-947553
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:31:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    addons-947553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ab490c5e4f046af88ecdee8117466b4
	  System UUID:                2ab490c5-e4f0-46af-88ec-dee8117466b4
	  Boot ID:                    1ea0245c-4d70-493b-9a36-f639a36dba5f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-6hpj8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         9m31s
	  kube-system                 amd-gpu-device-plugin-sl95v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 coredns-66bc5c9577-tpfkd                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m40s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 csi-hostpathplugin-xtf7r                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 etcd-addons-947553                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m45s
	  kube-system                 kube-apiserver-addons-947553                250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-controller-manager-addons-947553       200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kube-system                 kube-proxy-92nmr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-scheduler-addons-947553                100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 snapshot-controller-7d9fbc56b8-7n9bg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 snapshot-controller-7d9fbc56b8-944pl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m38s  kube-proxy       
	  Normal  Starting                 9m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m45s  kubelet          Node addons-947553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m45s  kubelet          Node addons-947553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m45s  kubelet          Node addons-947553 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m44s  kubelet          Node addons-947553 status is now: NodeReady
	  Normal  RegisteredNode           9m41s  node-controller  Node addons-947553 event: Registered Node addons-947553 in Controller
	
	
	==> dmesg <==
	[  +3.551453] kauditd_printk_skb: 395 callbacks suppressed
	[  +6.168214] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.651247] kauditd_printk_skb: 17 callbacks suppressed
	[Nov20 20:23] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.679825] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.000053] kauditd_printk_skb: 157 callbacks suppressed
	[  +5.059481] kauditd_printk_skb: 109 callbacks suppressed
	[Nov20 20:24] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.445964] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.477031] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.089818] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:25] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.536974] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.509608] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000043] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:26] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.002720] kauditd_printk_skb: 50 callbacks suppressed
	[ +11.737417] kauditd_printk_skb: 103 callbacks suppressed
	[Nov20 20:27] kauditd_printk_skb: 15 callbacks suppressed
	[Nov20 20:28] kauditd_printk_skb: 21 callbacks suppressed
	[Nov20 20:29] kauditd_printk_skb: 9 callbacks suppressed
	[Nov20 20:30] kauditd_printk_skb: 26 callbacks suppressed
	[ +21.384911] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45] <==
	{"level":"info","ts":"2025-11-20T20:23:44.570260Z","caller":"traceutil/trace.go:172","msg":"trace[663488031] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1136; }","duration":"154.066668ms","start":"2025-11-20T20:23:44.416165Z","end":"2025-11-20T20:23:44.570231Z","steps":["trace[663488031] 'read index received'  (duration: 154.021094ms)","trace[663488031] 'applied index is now lower than readState.Index'  (duration: 44.411µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:23:44.570877Z","caller":"traceutil/trace.go:172","msg":"trace[715433296] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"233.967936ms","start":"2025-11-20T20:23:44.336900Z","end":"2025-11-20T20:23:44.570868Z","steps":["trace[715433296] 'process raft request'  (duration: 233.871288ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571611Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.483381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:23:44.571673Z","caller":"traceutil/trace.go:172","msg":"trace[884414279] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1098; }","duration":"111.548598ms","start":"2025-11-20T20:23:44.460117Z","end":"2025-11-20T20:23:44.571666Z","steps":["trace[884414279] 'agreement among raft nodes before linearized reading'  (duration: 111.465445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571061Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.869609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.80\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-11-20T20:23:44.571810Z","caller":"traceutil/trace.go:172","msg":"trace[1446846650] range","detail":"{range_begin:/registry/masterleases/192.168.39.80; range_end:; response_count:1; response_revision:1098; }","duration":"155.64428ms","start":"2025-11-20T20:23:44.416161Z","end":"2025-11-20T20:23:44.571805Z","steps":["trace[1446846650] 'agreement among raft nodes before linearized reading'  (duration: 154.810085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:23:46.528477Z","caller":"traceutil/trace.go:172","msg":"trace[982384876] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"154.809492ms","start":"2025-11-20T20:23:46.373650Z","end":"2025-11-20T20:23:46.528459Z","steps":["trace[982384876] 'process raft request'  (duration: 154.328485ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.123570Z","caller":"traceutil/trace.go:172","msg":"trace[1335763238] linearizableReadLoop","detail":"{readStateIndex:1253; appliedIndex:1253; }","duration":"134.10576ms","start":"2025-11-20T20:24:24.989438Z","end":"2025-11-20T20:24:25.123544Z","steps":["trace[1335763238] 'read index received'  (duration: 134.100119ms)","trace[1335763238] 'applied index is now lower than readState.Index'  (duration: 5.092µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:25.123838Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.381481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-11-20T20:24:25.123864Z","caller":"traceutil/trace.go:172","msg":"trace[1178674559] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1204; }","duration":"134.473479ms","start":"2025-11-20T20:24:24.989384Z","end":"2025-11-20T20:24:25.123857Z","steps":["trace[1178674559] 'agreement among raft nodes before linearized reading'  (duration: 134.302699ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:24:25.124126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.465459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:25.124145Z","caller":"traceutil/trace.go:172","msg":"trace[392254424] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1205; }","duration":"131.486967ms","start":"2025-11-20T20:24:24.992652Z","end":"2025-11-20T20:24:25.124139Z","steps":["trace[392254424] 'agreement among raft nodes before linearized reading'  (duration: 131.453666ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.124311Z","caller":"traceutil/trace.go:172","msg":"trace[1682962710] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"237.606056ms","start":"2025-11-20T20:24:24.886699Z","end":"2025-11-20T20:24:25.124305Z","steps":["trace[1682962710] 'process raft request'  (duration: 237.320378ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.314678Z","caller":"traceutil/trace.go:172","msg":"trace[1797119853] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1279; }","duration":"155.702658ms","start":"2025-11-20T20:24:29.158960Z","end":"2025-11-20T20:24:29.314662Z","steps":["trace[1797119853] 'read index received'  (duration: 155.696769ms)","trace[1797119853] 'applied index is now lower than readState.Index'  (duration: 4.683µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:29.314797Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.822209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:29.314815Z","caller":"traceutil/trace.go:172","msg":"trace[163313341] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1230; }","duration":"155.853309ms","start":"2025-11-20T20:24:29.158956Z","end":"2025-11-20T20:24:29.314809Z","steps":["trace[163313341] 'agreement among raft nodes before linearized reading'  (duration: 155.793828ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.315341Z","caller":"traceutil/trace.go:172","msg":"trace[932727743] transaction","detail":"{read_only:false; response_revision:1231; number_of_response:1; }","duration":"158.601334ms","start":"2025-11-20T20:24:29.156731Z","end":"2025-11-20T20:24:29.315333Z","steps":["trace[932727743] 'process raft request'  (duration: 158.264408ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.860975Z","caller":"traceutil/trace.go:172","msg":"trace[570114600] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"232.699788ms","start":"2025-11-20T20:24:38.628262Z","end":"2025-11-20T20:24:38.860962Z","steps":["trace[570114600] 'process raft request'  (duration: 232.584342ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.862428Z","caller":"traceutil/trace.go:172","msg":"trace[1632150606] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"194.825132ms","start":"2025-11-20T20:24:38.667594Z","end":"2025-11-20T20:24:38.862419Z","steps":["trace[1632150606] 'process raft request'  (duration: 194.764757ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:25:59.796917Z","caller":"traceutil/trace.go:172","msg":"trace[1018787678] transaction","detail":"{read_only:false; response_revision:1587; number_of_response:1; }","duration":"178.519957ms","start":"2025-11-20T20:25:59.618371Z","end":"2025-11-20T20:25:59.796891Z","steps":["trace[1018787678] 'process raft request'  (duration: 178.419059ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:26:07.306954Z","caller":"traceutil/trace.go:172","msg":"trace[1832150044] linearizableReadLoop","detail":"{readStateIndex:1696; appliedIndex:1696; }","duration":"207.161975ms","start":"2025-11-20T20:26:07.099774Z","end":"2025-11-20T20:26:07.306936Z","steps":["trace[1832150044] 'read index received'  (duration: 207.151183ms)","trace[1832150044] 'applied index is now lower than readState.Index'  (duration: 6.599µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:26:07.307088Z","caller":"traceutil/trace.go:172","msg":"trace[519307734] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"362.807072ms","start":"2025-11-20T20:26:06.944270Z","end":"2025-11-20T20:26:07.307077Z","steps":["trace[519307734] 'process raft request'  (duration: 362.695059ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:26:07.307192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.369314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3725"}
	{"level":"info","ts":"2025-11-20T20:26:07.307216Z","caller":"traceutil/trace.go:172","msg":"trace[875135275] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:1621; }","duration":"207.439279ms","start":"2025-11-20T20:26:07.099770Z","end":"2025-11-20T20:26:07.307209Z","steps":["trace[875135275] 'agreement among raft nodes before linearized reading'  (duration: 207.290795ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:26:07.307851Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:26:06.944254Z","time spent":"362.881173ms","remote":"127.0.0.1:35880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3014,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:1620 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:2970 >> failure:<request_range:<key:\"/registry/pods/default/registry-test\" > >"}
	
	
	==> kernel <==
	 20:31:30 up 10 min,  0 users,  load average: 0.19, 1.09, 0.89
	Linux addons-947553 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 20:23:00.364867       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1120 20:23:00.365762       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:00.365790       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1120 20:23:00.366969       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1120 20:23:34.247008       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	W1120 20:23:34.253741       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:34.253819       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1120 20:23:34.256485       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	E1120 20:23:34.259388       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	E1120 20:23:34.271232       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	I1120 20:23:34.434058       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1120 20:24:45.470175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50698: use of closed network connection
	E1120 20:24:45.698946       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50724: use of closed network connection
	I1120 20:24:55.153735       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.73.86"}
	I1120 20:25:35.271669       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1120 20:26:07.917022       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1120 20:26:08.188570       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.64.46"}
	E1120 20:29:56.936137       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1120 20:29:56.944298       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1120 20:29:56.956788       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be] <==
	I1120 20:21:49.579336       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:21:54.678834       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1120 20:21:58.672593       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1120 20:22:19.544397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:22:19.546674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1120 20:22:19.546720       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1120 20:22:19.600217       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1120 20:22:19.618675       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 20:22:19.646978       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:22:19.720013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1120 20:22:49.656241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:22:49.730478       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 20:23:19.661239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:23:19.740631       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 20:24:55.213061       1 replica_set.go:587] "Unhandled Error" err="sync \"headlamp/headlamp-6945c6f4d\" failed with pods \"headlamp-6945c6f4d-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I1120 20:24:58.991066       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1120 20:26:18.292121       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1120 20:26:30.134630       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I1120 20:28:38.989345       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E1120 20:30:04.558985       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:30:19.559844       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:30:34.560057       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:30:49.561233       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:31:04.562259       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1120 20:31:19.562430       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf] <==
	I1120 20:21:51.944081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:21:52.047283       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:21:52.059178       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1120 20:21:52.063486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:21:52.317013       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:21:52.317608       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:21:52.319592       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:21:52.353676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:21:52.353988       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:21:52.354004       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:21:52.365989       1 config.go:200] "Starting service config controller"
	I1120 20:21:52.366010       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:21:52.373413       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:21:52.373476       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:21:52.373601       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:21:52.373606       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:21:52.404955       1 config.go:309] "Starting node config controller"
	I1120 20:21:52.405179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:21:52.405460       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:21:52.474183       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:21:52.474283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:21:52.570175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b] <==
	E1120 20:21:42.658146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:42.658289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:42.658479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:42.659065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:21:42.659191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:42.659355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:42.659676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:21:42.660629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:43.501696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:21:43.568808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:21:43.596853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:43.607731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:21:43.612970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:21:43.637766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:21:43.650165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:43.687838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:21:43.786838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:43.825959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:21:43.878175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:43.895745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:43.953162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:21:43.991210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:44.021889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:21:44.053100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 20:21:46.731200       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:30:27 addons-947553 kubelet[1518]: I1120 20:30:27.292713    1518 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9329f2b-eaa2-4b45-b91d-3433062e9ac0-config-volume\") on node \"addons-947553\" DevicePath \"\""
	Nov 20 20:30:27 addons-947553 kubelet[1518]: I1120 20:30:27.671974    1518 scope.go:117] "RemoveContainer" containerID="7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd"
	Nov 20 20:30:27 addons-947553 kubelet[1518]: I1120 20:30:27.797202    1518 scope.go:117] "RemoveContainer" containerID="7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd"
	Nov 20 20:30:27 addons-947553 kubelet[1518]: E1120 20:30:27.798009    1518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd\": container with ID starting with 7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd not found: ID does not exist" containerID="7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd"
	Nov 20 20:30:27 addons-947553 kubelet[1518]: I1120 20:30:27.798063    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd"} err="failed to get container status \"7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd\": rpc error: code = NotFound desc = could not find container \"7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd\": container with ID starting with 7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd not found: ID does not exist"
	Nov 20 20:30:29 addons-947553 kubelet[1518]: I1120 20:30:29.336051    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9329f2b-eaa2-4b45-b91d-3433062e9ac0" path="/var/lib/kubelet/pods/f9329f2b-eaa2-4b45-b91d-3433062e9ac0/volumes"
	Nov 20 20:30:30 addons-947553 kubelet[1518]: I1120 20:30:30.330190    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:30:35 addons-947553 kubelet[1518]: E1120 20:30:35.730283    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670635729866675  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:30:35 addons-947553 kubelet[1518]: E1120 20:30:35.730719    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670635729866675  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:30:45 addons-947553 kubelet[1518]: E1120 20:30:45.734408    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670645733866824  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:30:45 addons-947553 kubelet[1518]: E1120 20:30:45.734435    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670645733866824  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:30:55 addons-947553 kubelet[1518]: E1120 20:30:55.737850    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670655737450182  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:30:55 addons-947553 kubelet[1518]: E1120 20:30:55.737895    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670655737450182  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:31:04 addons-947553 kubelet[1518]: E1120 20:31:04.813315    1518 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 20 20:31:04 addons-947553 kubelet[1518]: E1120 20:31:04.813393    1518 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 20 20:31:04 addons-947553 kubelet[1518]: E1120 20:31:04.813705    1518 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(3fabe4f4-d0a9-40fe-a635-e27af546a8ce): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:31:04 addons-947553 kubelet[1518]: E1120 20:31:04.813741    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="3fabe4f4-d0a9-40fe-a635-e27af546a8ce"
	Nov 20 20:31:05 addons-947553 kubelet[1518]: E1120 20:31:05.740613    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670665740031587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:31:05 addons-947553 kubelet[1518]: E1120 20:31:05.740639    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670665740031587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:31:12 addons-947553 kubelet[1518]: I1120 20:31:12.330395    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sl95v" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:31:15 addons-947553 kubelet[1518]: E1120 20:31:15.743782    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670675743310825  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:31:15 addons-947553 kubelet[1518]: E1120 20:31:15.743808    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670675743310825  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:31:20 addons-947553 kubelet[1518]: E1120 20:31:20.330235    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="3fabe4f4-d0a9-40fe-a635-e27af546a8ce"
	Nov 20 20:31:25 addons-947553 kubelet[1518]: E1120 20:31:25.747086    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670685746578923  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:31:25 addons-947553 kubelet[1518]: E1120 20:31:25.747113    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670685746578923  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	
	
	==> storage-provisioner [1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806] <==
	W1120 20:31:05.752335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:07.757341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:07.763164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:09.767437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:09.774923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:11.778456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:11.783639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:13.788124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:13.794778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:15.799312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:15.805853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:17.811080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:17.818807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:19.821927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:19.827730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:21.832318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:21.841173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:23.845269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:23.852968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:25.856604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:25.862105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:27.866117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:27.875604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:29.879318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:29.885904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-947553 -n addons-947553
helpers_test.go:269: (dbg) Run:  kubectl --context addons-947553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg: exit status 1 (89.027228ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:26:08 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8bvn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s8bvn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m23s                 default-scheduler  Successfully assigned default/nginx to addons-947553
	  Warning  Failed     117s (x2 over 3m57s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     117s (x2 over 3m57s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    102s (x2 over 3m57s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     102s (x2 over 3m57s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    91s (x3 over 5m23s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:25:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw89l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mw89l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-947553
	  Warning  Failed     2m27s (x2 over 4m58s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m (x3 over 6m2s)      kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     27s (x3 over 4m58s)    kubelet            Error: ErrImagePull
	  Warning  Failed     27s                    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    11s (x3 over 4m57s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     11s (x3 over 4m57s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7w87 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-w7w87:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-whk72" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xqmtg" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.903381203s)
--- FAIL: TestAddons/parallel/CSI (386.47s)
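
Note: the image pull failures above (docker.io/nginx and docker.io/nginx:alpine) are Docker Hub "toomanyrequests" rate-limit errors, not CSI driver faults. A possible mitigation for reruns (illustrative only, not part of this run; the secret name "dockerhub" and the <user>/<token> placeholders are assumptions) is to preload the affected images or authenticate the pulls:

	# Preload the rate-limited images into the cluster ahead of the test:
	out/minikube-linux-amd64 -p addons-947553 image load docker.io/nginx:alpine
	out/minikube-linux-amd64 -p addons-947553 image load docker.io/nginx
	# Or authenticate pulls against docker.io (pods must then reference
	# the secret via imagePullSecrets):
	kubectl --context addons-947553 create secret docker-registry dockerhub \
	  --docker-username=<user> --docker-password=<token> -n default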

TestAddons/parallel/LocalPath (345.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-947553 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-947553 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-947553 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... identical helpers_test.go:402 poll repeated ~300 times over the 5m wait; duplicate lines elided ...]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-947553 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.228µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
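
For reference, the 5m0s wait that just timed out amounts to the polling loop below (a minimal shell sketch of what the helpers_test.go:402 loop is doing; the 2-second interval is an assumption, the real helper may poll differently):

	deadline=$((SECONDS + 300))  # the test's 5m0s budget
	while [ "$SECONDS" -lt "$deadline" ]; do
	  phase=$(kubectl --context addons-947553 get pvc test-pvc -n default \
	    -o 'jsonpath={.status.phase}')
	  [ "$phase" = "Bound" ] && break  # PVC bound: stop waiting
	  sleep 2
	done
	[ "$phase" = "Bound" ] || echo "failed waiting for PVC test-pvc" >&2
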
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-947553 -n addons-947553
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 logs -n 25: (1.25047924s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │ 20 Nov 25 20:20 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ -o=json --download-only -p download-only-948147 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ --download-only -p binary-mirror-717684 --alsologtostderr --binary-mirror http://127.0.0.1:46607 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ -p binary-mirror-717684                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ addons  │ disable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ addons  │ enable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ start   │ -p addons-947553 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ enable headlamp -p addons-947553 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ ip      │ addons-947553 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                         │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:28 UTC │ 20 Nov 25 20:28 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:04.799759    8315 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:04.799869    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.799880    8315 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:04.799886    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.800101    8315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:21:04.800589    8315 out.go:368] Setting JSON to false
	I1120 20:21:04.801389    8315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":215,"bootTime":1763669850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:21:04.801502    8315 start.go:143] virtualization: kvm guest
	I1120 20:21:04.803491    8315 out.go:179] * [addons-947553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:21:04.804816    8315 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:21:04.804809    8315 notify.go:221] Checking for updates...
	I1120 20:21:04.807406    8315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:04.808794    8315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:04.810101    8315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:04.811420    8315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:21:04.812487    8315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:21:04.813679    8315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:21:04.845057    8315 out.go:179] * Using the kvm2 driver based on user configuration
	I1120 20:21:04.846216    8315 start.go:309] selected driver: kvm2
	I1120 20:21:04.846231    8315 start.go:930] validating driver "kvm2" against <nil>
	I1120 20:21:04.846241    8315 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:21:04.846961    8315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:21:04.847180    8315 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:21:04.847211    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:04.847249    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:04.847263    8315 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1120 20:21:04.847320    8315 start.go:353] cluster config:
	{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:04.847407    8315 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:21:04.848659    8315 out.go:179] * Starting "addons-947553" primary control-plane node in "addons-947553" cluster
	I1120 20:21:04.849659    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:04.849691    8315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:21:04.849701    8315 cache.go:65] Caching tarball of preloaded images
	I1120 20:21:04.849792    8315 preload.go:238] Found /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:21:04.849803    8315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:21:04.850086    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:04.850110    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json: {Name:mk61841fddacaf75a98d91c699b32f9aeeaf9c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:04.850231    8315 start.go:360] acquireMachinesLock for addons-947553: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 20:21:04.850284    8315 start.go:364] duration metric: took 40.752µs to acquireMachinesLock for "addons-947553"
	I1120 20:21:04.850302    8315 start.go:93] Provisioning new machine with config: &{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:04.850352    8315 start.go:125] createHost starting for "" (driver="kvm2")
	I1120 20:21:04.852328    8315 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1120 20:21:04.852480    8315 start.go:159] libmachine.API.Create for "addons-947553" (driver="kvm2")
	I1120 20:21:04.852506    8315 client.go:173] LocalClient.Create starting
	I1120 20:21:04.852580    8315 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem
	I1120 20:21:05.105122    8315 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem
	I1120 20:21:05.182169    8315 main.go:143] libmachine: creating domain...
	I1120 20:21:05.182188    8315 main.go:143] libmachine: creating network...
	I1120 20:21:05.183682    8315 main.go:143] libmachine: found existing default network
	I1120 20:21:05.183926    8315 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.184462    8315 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d98350}
	I1120 20:21:05.184549    8315 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-947553</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
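
For reference, the subnet probe at network.go:206 above walks candidate private /24 ranges and takes the first one that does not collide with anything already on the host, then feeds it into the <network> definition that follows. A minimal stdlib sketch of that idea; the candidate list and the helper name freePrivateSubnet are illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"net"
    )

    // overlaps reports whether two CIDR ranges share any addresses.
    func overlaps(a, b *net.IPNet) bool {
    	return a.Contains(b.IP) || b.Contains(a.IP)
    }

    // freePrivateSubnet returns the first candidate that does not overlap
    // an address already assigned to a host interface.
    func freePrivateSubnet(candidates []string) (*net.IPNet, error) {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return nil, err
    	}
    	for _, c := range candidates {
    		_, cand, err := net.ParseCIDR(c)
    		if err != nil {
    			return nil, err
    		}
    		free := true
    		for _, a := range addrs {
    			if ipn, ok := a.(*net.IPNet); ok && overlaps(cand, ipn) {
    				free = false
    				break
    			}
    		}
    		if free {
    			return cand, nil
    		}
    	}
    	return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
    	// 192.168.39.0/24 is the range this run settles on; the list is illustrative.
    	s, err := freePrivateSubnet([]string{"192.168.39.0/24", "192.168.50.0/24"})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using free private subnet", s)
    }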
	
	I1120 20:21:05.190086    8315 main.go:143] libmachine: creating private network mk-addons-947553 192.168.39.0/24...
	I1120 20:21:05.255182    8315 main.go:143] libmachine: private network mk-addons-947553 192.168.39.0/24 created
	I1120 20:21:05.255605    8315 main.go:143] libmachine: <network>
	  <name>mk-addons-947553</name>
	  <uuid>aa8efef2-a4fa-46da-99ec-8e728046a9cf</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:9d:8a:68'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.255642    8315 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.255667    8315 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1120 20:21:05.255686    8315 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.255775    8315 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21923-3793/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1120 20:21:05.515325    8315 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa...
	I1120 20:21:05.718020    8315 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk...
	I1120 20:21:05.718065    8315 main.go:143] libmachine: Writing magic tar header
	I1120 20:21:05.718104    8315 main.go:143] libmachine: Writing SSH key tar header
	I1120 20:21:05.718203    8315 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.718284    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553
	I1120 20:21:05.718335    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 (perms=drwx------)
	I1120 20:21:05.718363    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines
	I1120 20:21:05.718383    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines (perms=drwxr-xr-x)
	I1120 20:21:05.718404    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.718421    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube (perms=drwxr-xr-x)
	I1120 20:21:05.718438    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793
	I1120 20:21:05.718456    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793 (perms=drwxrwxr-x)
	I1120 20:21:05.718473    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1120 20:21:05.718490    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1120 20:21:05.718505    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1120 20:21:05.718521    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1120 20:21:05.718536    8315 main.go:143] libmachine: checking permissions on dir: /home
	I1120 20:21:05.718549    8315 main.go:143] libmachine: skipping /home - not owner
	I1120 20:21:05.718557    8315 main.go:143] libmachine: defining domain...
	I1120 20:21:05.719886    8315 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
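
The domain XML logged above is rendered from a Go template inside the kvm2 driver. A heavily trimmed sketch of that pattern; the struct and field names here are illustrative stand-ins, not the driver's real types:

    package main

    import (
    	"os"
    	"text/template"
    )

    // domainTmpl is a cut-down libvirt domain skeleton; the real template
    // also declares disks, NICs, the serial console, and the RNG device.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
    </domain>
    `

    type domainConfig struct {
    	Name      string
    	MemoryMiB int
    	CPUs      int
    }

    func main() {
    	t := template.Must(template.New("domain").Parse(domainTmpl))
    	// Values mirror this run: 2 vCPUs and 4096 MiB for addons-947553.
    	cfg := domainConfig{Name: "addons-947553", MemoryMiB: 4096, CPUs: 2}
    	if err := t.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }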
	
	I1120 20:21:05.727760    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:79:1f:b5 in network default
	I1120 20:21:05.728410    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:05.728434    8315 main.go:143] libmachine: starting domain...
	I1120 20:21:05.728441    8315 main.go:143] libmachine: ensuring networks are active...
	I1120 20:21:05.729136    8315 main.go:143] libmachine: Ensuring network default is active
	I1120 20:21:05.729504    8315 main.go:143] libmachine: Ensuring network mk-addons-947553 is active
	I1120 20:21:05.730087    8315 main.go:143] libmachine: getting domain XML...
	I1120 20:21:05.731121    8315 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <uuid>2ab490c5-e4f0-46af-88ec-dee8117466b4</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:a7:2c'/>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:79:1f:b5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1120 20:21:07.012614    8315 main.go:143] libmachine: waiting for domain to start...
	I1120 20:21:07.013937    8315 main.go:143] libmachine: domain is now running
	I1120 20:21:07.013958    8315 main.go:143] libmachine: waiting for IP...
	I1120 20:21:07.014713    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.015361    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.015380    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.015661    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.015708    8315 retry.go:31] will retry after 270.684091ms: waiting for domain to come up
	I1120 20:21:07.288186    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.288839    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.288865    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.289198    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.289247    8315 retry.go:31] will retry after 384.258097ms: waiting for domain to come up
	I1120 20:21:07.674731    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.675347    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.675362    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.675602    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.675642    8315 retry.go:31] will retry after 325.268494ms: waiting for domain to come up
	I1120 20:21:08.002089    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.002712    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.002729    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.003011    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.003044    8315 retry.go:31] will retry after 532.953777ms: waiting for domain to come up
	I1120 20:21:08.537708    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.538539    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.538554    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.538839    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.538878    8315 retry.go:31] will retry after 671.32775ms: waiting for domain to come up
	I1120 20:21:09.212032    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.212741    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.212765    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.213102    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.213142    8315 retry.go:31] will retry after 640.716702ms: waiting for domain to come up
	I1120 20:21:09.855420    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.856063    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.856083    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.856391    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.856428    8315 retry.go:31] will retry after 715.495515ms: waiting for domain to come up
	I1120 20:21:10.573053    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:10.573668    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:10.573685    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:10.574006    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:10.574049    8315 retry.go:31] will retry after 1.386473849s: waiting for domain to come up
	I1120 20:21:11.962706    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:11.963438    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:11.963454    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:11.963745    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:11.963779    8315 retry.go:31] will retry after 1.671471747s: waiting for domain to come up
	I1120 20:21:13.637832    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:13.638601    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:13.638620    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:13.639009    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:13.639040    8315 retry.go:31] will retry after 1.524844768s: waiting for domain to come up
	I1120 20:21:15.165792    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:15.166517    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:15.166555    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:15.166908    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:15.166949    8315 retry.go:31] will retry after 2.171556586s: waiting for domain to come up
	I1120 20:21:17.341326    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:17.341989    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:17.342008    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:17.342371    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:17.342408    8315 retry.go:31] will retry after 2.613437366s: waiting for domain to come up
	I1120 20:21:19.957329    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:19.958097    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:19.958115    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:19.958466    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:19.958501    8315 retry.go:31] will retry after 4.105323605s: waiting for domain to come up
	I1120 20:21:24.068938    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069767    8315 main.go:143] libmachine: domain addons-947553 has current primary IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069790    8315 main.go:143] libmachine: found domain IP: 192.168.39.80
	I1120 20:21:24.069802    8315 main.go:143] libmachine: reserving static IP address...
	I1120 20:21:24.070350    8315 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-947553", mac: "52:54:00:7b:a7:2c", ip: "192.168.39.80"} in network mk-addons-947553
	I1120 20:21:24.251658    8315 main.go:143] libmachine: reserved static IP address 192.168.39.80 for domain addons-947553
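
The retry.go:31 lines above poll for a DHCP lease with growing, jittered delays (270ms, 384ms, ... up to roughly 4s) until the domain reports an address. A minimal sketch of that retry shape, assuming plain exponential backoff with jitter; retryUntil is a hypothetical helper, not minikube's API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil calls fn with growing, jittered pauses until it succeeds
    // or the deadline passes.
    func retryUntil(timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		// Jitter keeps parallel waiters from polling in lockstep.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }

    func main() {
    	tries := 0
    	err := retryUntil(10*time.Second, func() error {
    		tries++
    		if tries < 4 {
    			return errors.New("no lease yet")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }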
	I1120 20:21:24.251676    8315 main.go:143] libmachine: waiting for SSH...
	I1120 20:21:24.251682    8315 main.go:143] libmachine: Getting to WaitForSSH function...
	I1120 20:21:24.254839    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255480    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.255507    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255698    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.255932    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.255946    8315 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1120 20:21:24.357511    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
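
WaitForSSH above simply runs `exit 0` over SSH until it succeeds; a zero exit status is the readiness signal. A sketch of that probe using golang.org/x/crypto/ssh (an extra module), reusing the user and key path from this run; probeSSH is a hypothetical helper, and host-key checking is disabled purely to keep the demo short:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // probeSSH dials addr and runs "exit 0"; a nil error means the guest
    // is up and accepting SSH commands.
    func probeSSH(addr, user, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
    		Timeout:         5 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0")
    }

    func main() {
    	err := probeSSH("192.168.39.80:22", "docker",
    		"/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa")
    	fmt.Println("ssh probe:", err)
    }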
	I1120 20:21:24.357947    8315 main.go:143] libmachine: domain creation complete
	I1120 20:21:24.359373    8315 machine.go:94] provisionDockerMachine start ...
	I1120 20:21:24.361503    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.361927    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.361949    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.362121    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.362368    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.362381    8315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:21:24.462018    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1120 20:21:24.462045    8315 buildroot.go:166] provisioning hostname "addons-947553"
	I1120 20:21:24.464884    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465302    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.465327    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465556    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.465783    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.465796    8315 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-947553 && echo "addons-947553" | sudo tee /etc/hostname
	I1120 20:21:24.590591    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947553
	
	I1120 20:21:24.593332    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593716    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.593739    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593959    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.594201    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.594220    8315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-947553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-947553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-947553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:21:24.704349    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:21:24.704375    8315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 20:21:24.704425    8315 buildroot.go:174] setting up certificates
	I1120 20:21:24.704437    8315 provision.go:84] configureAuth start
	I1120 20:21:24.707018    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.707382    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.707405    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709518    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709819    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.709844    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709960    8315 provision.go:143] copyHostCerts
	I1120 20:21:24.710021    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 20:21:24.710131    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 20:21:24.710204    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 20:21:24.710279    8315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.addons-947553 san=[127.0.0.1 192.168.39.80 addons-947553 localhost minikube]
	I1120 20:21:24.868893    8315 provision.go:177] copyRemoteCerts
	I1120 20:21:24.868955    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:21:24.871421    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.871778    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.871813    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.872001    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:24.954555    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:21:24.986020    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:21:25.016669    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:21:25.046712    8315 provision.go:87] duration metric: took 342.262806ms to configureAuth
	I1120 20:21:25.046739    8315 buildroot.go:189] setting minikube options for container-runtime
	I1120 20:21:25.046974    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:25.049642    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050132    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.050155    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050331    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.050555    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.050571    8315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:21:25.295480    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:21:25.295505    8315 machine.go:97] duration metric: took 936.115627ms to provisionDockerMachine
	I1120 20:21:25.295517    8315 client.go:176] duration metric: took 20.443004703s to LocalClient.Create
	I1120 20:21:25.295530    8315 start.go:167] duration metric: took 20.443049547s to libmachine.API.Create "addons-947553"
	I1120 20:21:25.295539    8315 start.go:293] postStartSetup for "addons-947553" (driver="kvm2")
	I1120 20:21:25.295551    8315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:21:25.295599    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:21:25.298453    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.298889    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.298912    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.299118    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.380706    8315 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:21:25.386067    8315 info.go:137] Remote host: Buildroot 2025.02
	I1120 20:21:25.386096    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 20:21:25.386163    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 20:21:25.386186    8315 start.go:296] duration metric: took 90.641008ms for postStartSetup
	I1120 20:21:25.389037    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389412    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.389432    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389654    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:25.389819    8315 start.go:128] duration metric: took 20.539459484s to createHost
	I1120 20:21:25.392104    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392481    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.392504    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392693    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.392952    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.392965    8315 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 20:21:25.493567    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763670085.456620738
	
	I1120 20:21:25.493591    8315 fix.go:216] guest clock: 1763670085.456620738
	I1120 20:21:25.493598    8315 fix.go:229] Guest: 2025-11-20 20:21:25.456620738 +0000 UTC Remote: 2025-11-20 20:21:25.389830223 +0000 UTC m=+20.636741018 (delta=66.790515ms)
	I1120 20:21:25.493614    8315 fix.go:200] guest clock delta is within tolerance: 66.790515ms
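
fix.go above captures `date +%s.%N` from the guest and compares it to the host clock; this run passes because the 66ms delta is inside tolerance. A stdlib sketch of that comparison; clockDelta and the one-second tolerance are illustrative, not minikube's actual constant:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses a `date +%s.%N` style timestamp and returns how far
    // the guest clock sits from the given reference time. Float parsing
    // loses sub-microsecond precision, which is fine for a ms-level check.
    func clockDelta(guest string, ref time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guest, 64)
    	if err != nil {
    		return 0, err
    	}
    	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
    	return ref.Sub(guestTime), nil
    }

    func main() {
    	// Timestamp taken from the log line above.
    	d, err := clockDelta("1763670085.456620738", time.Now())
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = time.Second
    	fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d.Abs() < tolerance)
    }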
	I1120 20:21:25.493618    8315 start.go:83] releasing machines lock for "addons-947553", held for 20.643324737s
	I1120 20:21:25.496394    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.496731    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.496750    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.497416    8315 ssh_runner.go:195] Run: cat /version.json
	I1120 20:21:25.497480    8315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:21:25.500666    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.500828    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501105    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501135    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501175    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501196    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501333    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.501488    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.605393    8315 ssh_runner.go:195] Run: systemctl --version
	I1120 20:21:25.612006    8315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:21:25.772800    8315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:21:25.780223    8315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:21:25.780282    8315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:21:25.801102    8315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 20:21:25.801129    8315 start.go:496] detecting cgroup driver to use...
	I1120 20:21:25.801204    8315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:21:25.821353    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:21:25.843177    8315 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:21:25.843231    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:21:25.868522    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:21:25.885911    8315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:21:26.035325    8315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:21:26.252665    8315 docker.go:234] disabling docker service ...
	I1120 20:21:26.252745    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:21:26.269964    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:21:26.285883    8315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:21:26.444730    8315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:21:26.588236    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:21:26.605731    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:21:26.631197    8315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:21:26.631278    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.644989    8315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 20:21:26.645074    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.659053    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.672870    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.687322    8315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:21:26.702284    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.716913    8315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.738871    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
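
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this shape (reconstructed from the commands themselves, not captured from the VM):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]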
	I1120 20:21:26.752362    8315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:21:26.763831    8315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1120 20:21:26.763912    8315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1120 20:21:26.789002    8315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:21:26.803924    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:26.952317    8315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:21:27.200343    8315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:21:27.200435    8315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:21:27.206384    8315 start.go:564] Will wait 60s for crictl version
	I1120 20:21:27.206464    8315 ssh_runner.go:195] Run: which crictl
	I1120 20:21:27.211256    8315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 20:21:27.250686    8315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 20:21:27.250789    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.281244    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.453589    8315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1120 20:21:27.519790    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520199    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:27.520222    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520413    8315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1120 20:21:27.525676    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:27.542910    8315 kubeadm.go:884] updating cluster {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:21:27.543059    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:27.543129    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:27.574818    8315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 20:21:27.574926    8315 ssh_runner.go:195] Run: which lz4
	I1120 20:21:27.580276    8315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1120 20:21:27.587089    8315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1120 20:21:27.587120    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1120 20:21:29.151749    8315 crio.go:462] duration metric: took 1.571528535s to copy over tarball
	I1120 20:21:29.151825    8315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1120 20:21:30.840010    8315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.688159594s)
	I1120 20:21:30.840053    8315 crio.go:469] duration metric: took 1.688277204s to extract the tarball
	I1120 20:21:30.840061    8315 ssh_runner.go:146] rm: /preloaded.tar.lz4
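The four steps above are the image preload fast path: stat probes whether a tarball is already on the node, the cached tarball is streamed over SSH when it is not, tar extracts it into /var with xattrs preserved (so file capabilities on the binaries survive), and the tarball is deleted afterwards. A hedged shell equivalent, with the paths from the log; "node" is a placeholder for the VM's SSH target, not something the log names:

    # Preload cached container images into CRI-O storage under /var.
    TARBALL=/home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    ssh node 'stat -c "%s %y" /preloaded.tar.lz4' 2>/dev/null \
      || scp "$TARBALL" node:/preloaded.tar.lz4
    ssh node 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'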
	I1120 20:21:30.882678    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:30.922657    8315 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:21:30.922680    8315 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:21:30.922687    8315 kubeadm.go:935] updating node { 192.168.39.80 8443 v1.34.1 crio true true} ...
	I1120 20:21:30.922783    8315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-947553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
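The unit snippet above relies on a standard systemd override rule: for a non-oneshot service, the empty ExecStart= line first clears the ExecStart inherited from the stock kubelet.service, and the following ExecStart= sets the minikube-specific command line; without the reset systemd would reject the duplicate directive. The snippet is installed as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below and activated with:

    sudo systemctl daemon-reload   # re-read unit files and drop-ins
    sudo systemctl start kubelet   # start using the overridden ExecStart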
	I1120 20:21:30.922874    8315 ssh_runner.go:195] Run: crio config
	I1120 20:21:30.970750    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:30.970771    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:30.970787    8315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:21:30.970807    8315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-947553 NodeName:addons-947553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:21:30.970921    8315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-947553"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.80"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
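The generated file above stacks four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one multi-document YAML stream; kubeadm reads all of them from a single --config file. A file like this can be checked before the init that follows, without touching node state, using stock kubeadm subcommands (the path is the one the log copies it to later):

    # Validate the multi-document config, then preview what init would do.
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run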
	
	I1120 20:21:30.970978    8315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:21:30.984115    8315 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:21:30.984179    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:21:30.997000    8315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1120 20:21:31.019490    8315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:21:31.040334    8315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1120 20:21:31.062447    8315 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I1120 20:21:31.066873    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:31.082252    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:31.225462    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:31.260197    8315 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553 for IP: 192.168.39.80
	I1120 20:21:31.260217    8315 certs.go:195] generating shared ca certs ...
	I1120 20:21:31.260232    8315 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.260386    8315 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 20:21:31.565609    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt ...
	I1120 20:21:31.565637    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt: {Name:mkbaf0e14aa61a2ff1b23e3cacd2c256e32e6647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565863    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key ...
	I1120 20:21:31.565878    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key: {Name:mk6aeca1c4b3f3e4ff969d4a1bc1fecc4b0fa343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565998    8315 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 20:21:32.272316    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt ...
	I1120 20:21:32.272345    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt: {Name:mk6e855dc2ded0db05a3455c6e851abbeb92043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272564    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key ...
	I1120 20:21:32.272590    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key: {Name:mkc4fdf928a4209309cd887410d07a4fb9cad8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272702    8315 certs.go:257] generating profile certs ...
	I1120 20:21:32.272778    8315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key
	I1120 20:21:32.272805    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt with IP's: []
	I1120 20:21:32.531299    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt ...
	I1120 20:21:32.531330    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: {Name:mkacef1d43c5fe9ffb1d09b61b8a2a7db2cf094d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531547    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key ...
	I1120 20:21:32.531568    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key: {Name:mk2cb4e6b2267fb750aa726a4e65ebdfb9212cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531675    8315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2
	I1120 20:21:32.531704    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80]
	I1120 20:21:32.818886    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 ...
	I1120 20:21:32.818915    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2: {Name:mk790b39b3d9776066f9b6fb58232a0c1fea8994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819086    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 ...
	I1120 20:21:32.819099    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2: {Name:mk4563c621ceba8c563d34ed8d2ea6985bc21d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819174    8315 certs.go:382] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt
	I1120 20:21:32.819257    8315 certs.go:386] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key
	I1120 20:21:32.819305    8315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key
	I1120 20:21:32.819322    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt with IP's: []
	I1120 20:21:33.229266    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt ...
	I1120 20:21:33.229303    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt: {Name:mk842c9b1c7d59553f9e9969540d37e3f124f603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229499    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key ...
	I1120 20:21:33.229519    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key: {Name:mk774bcb76c9d8c8959c52bd40c6db81e671bce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229746    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 20:21:33.229789    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:21:33.229825    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:21:33.229867    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
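The cert phase above builds two shared CAs (minikubeCA and proxyClientCA) and then per-profile leaf certs signed by them, with the apiserver cert carrying SANs for the cluster service IP, localhost, and the node IP. minikube does this with Go's crypto packages; as a hedged openssl illustration of just the apiserver step (SANs copied from the log, file names chosen for the example):

    # Self-signed CA, then an apiserver cert signed by it with the logged SANs.
    openssl req -x509 -new -nodes -newkey rsa:2048 -keyout ca.key -out ca.crt \
      -subj "/CN=minikubeCA" -days 1095
    openssl req -new -nodes -newkey rsa:2048 -keyout apiserver.key -out apiserver.csr \
      -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out apiserver.crt -days 1095 \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.80")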
	I1120 20:21:33.230425    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:21:33.262117    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:21:33.298274    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:21:33.335705    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:21:33.369053    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 20:21:33.401973    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:21:33.434941    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:21:33.467052    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:21:33.499463    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:21:33.533326    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:21:33.557271    8315 ssh_runner.go:195] Run: openssl version
	I1120 20:21:33.565199    8315 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.579252    8315 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:21:33.592359    8315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598287    8315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598357    8315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.606765    8315 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:21:33.620434    8315 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
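These lines wire the CA into the OpenSSL trust-store layout, where consumers locate a CA through a symlink named after the certificate's subject hash (here b5213941.0): the log first computes the hash, then links the hash name to the PEM. The same done by hand:

    # Link a CA into the OpenSSL hash-named trust store layout.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"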
	I1120 20:21:33.633673    8315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:21:33.639557    8315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:21:33.639640    8315 kubeadm.go:401] StartCluster: {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:33.639719    8315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:21:33.639785    8315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:21:33.678141    8315 cri.go:89] found id: ""
	I1120 20:21:33.678230    8315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:21:33.692525    8315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:21:33.705815    8315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:21:33.718541    8315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:21:33.718560    8315 kubeadm.go:158] found existing configuration files:
	
	I1120 20:21:33.718602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:21:33.730012    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:21:33.730084    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:21:33.742602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:21:33.754750    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:21:33.754833    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:21:33.773694    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.789522    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:21:33.789573    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.803646    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:21:33.817663    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:21:33.817714    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
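The four grep/rm pairs above apply one rule per kubeconfig: if a file under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443, it is treated as stale and removed so kubeadm can regenerate it; on this first start every grep exits with status 2 simply because the files do not exist yet. The same cleanup, compactly (file list and endpoint from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done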
	I1120 20:21:33.830895    8315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1120 20:21:34.010421    8315 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:21:45.965962    8315 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:21:45.966043    8315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:21:45.966134    8315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:21:45.966274    8315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:21:45.966402    8315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:21:45.966485    8315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:21:45.968313    8315 out.go:252]   - Generating certificates and keys ...
	I1120 20:21:45.968415    8315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:21:45.968512    8315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:21:45.968625    8315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:21:45.968701    8315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:21:45.968754    8315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:21:45.968819    8315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:21:45.968913    8315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:21:45.969101    8315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969192    8315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:21:45.969314    8315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969371    8315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:21:45.969421    8315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:21:45.969458    8315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:21:45.969504    8315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:21:45.969545    8315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:21:45.969595    8315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:21:45.969637    8315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:21:45.969697    8315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:21:45.969754    8315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:21:45.969823    8315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:21:45.969888    8315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:21:45.971245    8315 out.go:252]   - Booting up control plane ...
	I1120 20:21:45.971330    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:21:45.971396    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:21:45.971453    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:21:45.971554    8315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:21:45.971660    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:21:45.971754    8315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:21:45.971826    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:21:45.971880    8315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:21:45.972014    8315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:21:45.972174    8315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:21:45.972260    8315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.915384ms
	I1120 20:21:45.972339    8315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:21:45.972417    8315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.80:8443/livez
	I1120 20:21:45.972499    8315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:21:45.972565    8315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:21:45.972626    8315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.009474334s
	I1120 20:21:45.972680    8315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.600510793s
	I1120 20:21:45.972745    8315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502310178s
	I1120 20:21:45.972837    8315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:21:45.972964    8315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:21:45.973026    8315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:21:45.973213    8315 kubeadm.go:319] [mark-control-plane] Marking the node addons-947553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:21:45.973262    8315 kubeadm.go:319] [bootstrap-token] Using token: 2xpoj0.3iafwcplk6gzssxo
	I1120 20:21:45.975478    8315 out.go:252]   - Configuring RBAC rules ...
	I1120 20:21:45.975637    8315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:21:45.975749    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:21:45.975873    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:21:45.975991    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:21:45.976087    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:21:45.976159    8315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:21:45.976260    8315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:21:45.976297    8315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:21:45.976339    8315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:21:45.976345    8315 kubeadm.go:319] 
	I1120 20:21:45.976416    8315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:21:45.976432    8315 kubeadm.go:319] 
	I1120 20:21:45.976492    8315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:21:45.976498    8315 kubeadm.go:319] 
	I1120 20:21:45.976524    8315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:21:45.976573    8315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:21:45.976612    8315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:21:45.976618    8315 kubeadm.go:319] 
	I1120 20:21:45.976662    8315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:21:45.976669    8315 kubeadm.go:319] 
	I1120 20:21:45.976708    8315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:21:45.976716    8315 kubeadm.go:319] 
	I1120 20:21:45.976761    8315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:21:45.976832    8315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:21:45.976903    8315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:21:45.976909    8315 kubeadm.go:319] 
	I1120 20:21:45.976975    8315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:21:45.977039    8315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:21:45.977046    8315 kubeadm.go:319] 
	I1120 20:21:45.977115    8315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977197    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 \
	I1120 20:21:45.977222    8315 kubeadm.go:319] 	--control-plane 
	I1120 20:21:45.977228    8315 kubeadm.go:319] 
	I1120 20:21:45.977318    8315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:21:45.977332    8315 kubeadm.go:319] 
	I1120 20:21:45.977426    8315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977559    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 
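The --discovery-token-ca-cert-hash above pins the cluster CA for joining nodes. The value can be recomputed from the CA certificate to confirm a join command, using the standard recipe from the kubeadm documentation (the cert path here is the certificatesDir this log configured):

    # Recompute the sha256 CA public-key hash that kubeadm join expects.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'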
	I1120 20:21:45.977570    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:45.977577    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:45.978905    8315 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1120 20:21:45.980206    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1120 20:21:45.998278    8315 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
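The 496 bytes written above are minikube's bridge CNI definition; the log does not show the contents, so the following is only a representative bridge + host-local conflist for the pod CIDR configured here (10.244.0.0/16), not the literal file:

    # Hedged sketch of a bridge CNI config; field values other than the
    # subnet are illustrative, not taken from the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF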
	I1120 20:21:46.024557    8315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:21:46.024640    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.024705    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-947553 minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-947553 minikube.k8s.io/primary=true
	I1120 20:21:46.163608    8315 ops.go:34] apiserver oom_adj: -16
	I1120 20:21:46.163786    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.664084    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.164553    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.664473    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.164635    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.664221    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.163942    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.663901    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.164591    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
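The repeated "kubectl get sa default" calls above, one every ~500ms, are a readiness poll: the default service account appearing in the default namespace is the signal that the control plane can serve the RBAC write that elevateKubeSystemPrivileges performs. The same loop by hand (binary and kubeconfig paths from the log):

    # Poll until the default service account exists.
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done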
	I1120 20:21:50.290234    8315 kubeadm.go:1114] duration metric: took 4.265649758s to wait for elevateKubeSystemPrivileges
	I1120 20:21:50.290282    8315 kubeadm.go:403] duration metric: took 16.650648707s to StartCluster
	I1120 20:21:50.290306    8315 settings.go:142] acquiring lock: {Name:mke92973c8f33ef32fe11f7b266adf74cd3ec47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.290453    8315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:50.290990    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.291268    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:21:50.291283    8315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:50.291344    8315 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 20:21:50.291469    8315 addons.go:70] Setting gcp-auth=true in profile "addons-947553"
	I1120 20:21:50.291484    8315 addons.go:70] Setting ingress=true in profile "addons-947553"
	I1120 20:21:50.291498    8315 mustload.go:66] Loading cluster: addons-947553
	I1120 20:21:50.291500    8315 addons.go:239] Setting addon ingress=true in "addons-947553"
	I1120 20:21:50.291494    8315 addons.go:70] Setting cloud-spanner=true in profile "addons-947553"
	I1120 20:21:50.291519    8315 addons.go:239] Setting addon cloud-spanner=true in "addons-947553"
	I1120 20:21:50.291525    8315 addons.go:70] Setting registry=true in profile "addons-947553"
	I1120 20:21:50.291542    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291555    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291554    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291565    8315 addons.go:239] Setting addon registry=true in "addons-947553"
	I1120 20:21:50.291594    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291595    8315 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.291607    8315 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-947553"
	I1120 20:21:50.291627    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291692    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291474    8315 addons.go:70] Setting yakd=true in profile "addons-947553"
	I1120 20:21:50.292160    8315 addons.go:239] Setting addon yakd=true in "addons-947553"
	I1120 20:21:50.292192    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292250    8315 addons.go:70] Setting inspektor-gadget=true in profile "addons-947553"
	I1120 20:21:50.292272    8315 addons.go:239] Setting addon inspektor-gadget=true in "addons-947553"
	I1120 20:21:50.292297    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292485    8315 addons.go:70] Setting ingress-dns=true in profile "addons-947553"
	I1120 20:21:50.292520    8315 addons.go:239] Setting addon ingress-dns=true in "addons-947553"
	I1120 20:21:50.292545    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292621    8315 addons.go:70] Setting registry-creds=true in profile "addons-947553"
	I1120 20:21:50.292644    8315 addons.go:239] Setting addon registry-creds=true in "addons-947553"
	I1120 20:21:50.292671    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292677    8315 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-947553"
	I1120 20:21:50.292719    8315 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:21:50.292755    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292807    8315 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-947553"
	I1120 20:21:50.292829    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-947553"
	I1120 20:21:50.292880    8315 addons.go:70] Setting metrics-server=true in profile "addons-947553"
	I1120 20:21:50.292897    8315 addons.go:239] Setting addon metrics-server=true in "addons-947553"
	I1120 20:21:50.292922    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293069    8315 out.go:179] * Verifying Kubernetes components...
	I1120 20:21:50.293281    8315 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.293300    8315 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-947553"
	I1120 20:21:50.293321    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293536    8315 addons.go:70] Setting default-storageclass=true in profile "addons-947553"
	I1120 20:21:50.293556    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-947553"
	I1120 20:21:50.293573    8315 addons.go:70] Setting storage-provisioner=true in profile "addons-947553"
	I1120 20:21:50.293591    8315 addons.go:239] Setting addon storage-provisioner=true in "addons-947553"
	I1120 20:21:50.293613    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293979    8315 addons.go:70] Setting volcano=true in profile "addons-947553"
	I1120 20:21:50.294002    8315 addons.go:239] Setting addon volcano=true in "addons-947553"
	I1120 20:21:50.294026    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294103    8315 addons.go:70] Setting volumesnapshots=true in profile "addons-947553"
	I1120 20:21:50.294122    8315 addons.go:239] Setting addon volumesnapshots=true in "addons-947553"
	I1120 20:21:50.294146    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294465    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:50.297973    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.299952    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 20:21:50.299964    8315 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 20:21:50.300060    8315 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 20:21:50.300093    8315 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 20:21:50.299977    8315 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 20:21:50.301985    8315 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-947553"
	I1120 20:21:50.302030    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.302603    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 20:21:50.303185    8315 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 20:21:50.302631    8315 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:50.303261    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	W1120 20:21:50.302916    8315 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 20:21:50.303040    8315 addons.go:239] Setting addon default-storageclass=true in "addons-947553"
	I1120 20:21:50.303355    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.303953    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 20:21:50.303969    8315 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 20:21:50.303973    8315 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 20:21:50.303953    8315 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 20:21:50.304024    8315 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:50.305543    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 20:21:50.304040    8315 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 20:21:50.304099    8315 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 20:21:50.305800    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 20:21:50.304918    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.304913    8315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:21:50.305899    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:50.307319    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 20:21:50.306014    8315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:50.307351    8315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:21:50.307429    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.307470    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 20:21:50.307480    8315 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1120 20:21:50.306784    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:21:50.307511    8315 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:21:50.306817    8315 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 20:21:50.307620    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.306822    8315 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:50.307695    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 20:21:50.307706    8315 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:50.307716    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 20:21:50.306909    8315 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:50.308092    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 20:21:50.308474    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 20:21:50.308512    8315 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 20:21:50.308524    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 20:21:50.308827    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.308882    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309172    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.309208    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309325    8315 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 20:21:50.309319    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.309343    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:50.309353    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:21:50.309929    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.310172    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.311742    8315 out.go:179]   - Using image docker.io/busybox:stable
	I1120 20:21:50.311746    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 20:21:50.311894    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:50.311914    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 20:21:50.313106    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:50.313128    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 20:21:50.314097    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.314587    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 20:21:50.315478    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.315516    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.316257    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.316610    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 20:21:50.317131    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.317791    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318124    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318489    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.318521    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318877    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.319057    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319200    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319245    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 20:21:50.319767    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319780    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319803    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319808    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320039    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320130    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320260    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320721    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.320726    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321176    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321210    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321308    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321337    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321371    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321267    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321416    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321437    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321401    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321692    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321834    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 20:21:50.321903    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321928    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321951    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322097    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322416    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322441    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322690    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322712    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.322755    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323004    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323171    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.323197    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323359    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324196    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.324226    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324375    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.324536    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 20:21:50.325593    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 20:21:50.325607    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 20:21:50.328078    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328534    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.328557    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328735    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	W1120 20:21:50.476524    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.476558    8315 retry.go:31] will retry after 236.913044ms: ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513415    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513438    8315 retry.go:31] will retry after 367.013463ms: ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513646    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513672    8315 retry.go:31] will retry after 332.960576ms: ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
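The three handshake failures above are benign: the installers open many concurrent scp sessions while the guest's sshd is still coming up (the connection resets are consistent with sshd throttling a burst of new connections), and sshutil retries each dial after a short randomized delay. A minimal bash sketch of that retry behavior — IP, port, and key path are taken from the log; the fixed doubling delay is an assumption, the real retry.go randomizes it:

	delay=0.2
	for attempt in 1 2 3 4 5; do
	  # one dial attempt; "true" just exercises the handshake
	  ssh -i /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa \
	      -p 22 docker@192.168.39.80 true && break
	  echo "ssh handshake failed (attempt ${attempt}); retrying in ${delay}s" >&2
	  sleep "${delay}"
	  delay=$(echo "${delay} * 2" | bc)   # assumed policy: double the wait each round
	done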
	I1120 20:21:50.932554    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:50.932720    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:21:51.133049    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:51.144339    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:51.194458    8315 node_ready.go:35] waiting up to 6m0s for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206010    8315 node_ready.go:49] node "addons-947553" is "Ready"
	I1120 20:21:51.206043    8315 node_ready.go:38] duration metric: took 11.547378ms for node "addons-947553" to be "Ready" ...
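node_ready.go polls the node object until its Ready condition is True; here the node was already Ready, so the wait returned in ~12ms. An equivalent one-shot check with stock kubectl (context and node name from the log):

	kubectl --context addons-947553 wait --for=condition=Ready node/addons-947553 --timeout=6m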
	I1120 20:21:51.206057    8315 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:21:51.206112    8315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:21:51.317342    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:51.364561    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:51.396520    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 20:21:51.396550    8315 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 20:21:51.401286    8315 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 20:21:51.401312    8315 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 20:21:51.407832    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:51.408939    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:51.438765    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:51.452371    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:51.487541    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:21:51.487567    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 20:21:51.667634    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:51.705278    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 20:21:51.705307    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 20:21:52.073299    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 20:21:52.073332    8315 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 20:21:52.156840    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 20:21:52.156890    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 20:21:52.182216    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 20:21:52.182260    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 20:21:52.289345    8315 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.289373    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 20:21:52.358156    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:21:52.358186    8315 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:21:52.524224    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:52.790466    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 20:21:52.790495    8315 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 20:21:52.867899    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 20:21:52.867926    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 20:21:52.911549    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.970452    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 20:21:52.970488    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 20:21:53.004660    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.004687    8315 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:21:53.165475    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.165505    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 20:21:53.292981    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 20:21:53.293014    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 20:21:53.388236    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 20:21:53.388266    8315 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 20:21:53.476188    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.678912    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.790164    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 20:21:53.790192    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 20:21:53.898000    8315 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:53.898021    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 20:21:54.089534    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 20:21:54.089570    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 20:21:54.326111    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:54.418621    8315 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.485861131s)
	I1120 20:21:54.418657    8315 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
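The sed pipeline that completed at 20:21:54 rewrites the kube-system/coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward plugin so host.minikube.internal resolves to the host's gateway address, and inserts the log plugin ahead of errors. Assuming an otherwise stock Corefile (the other default plugins are elided here), the relevant fragment after replacement would look like:

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}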
	I1120 20:21:54.662053    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 20:21:54.662081    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 20:21:54.924608    8315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-947553" context rescaled to 1 replicas
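The rescale drops the default two CoreDNS replicas to one on this single-node cluster; the stock-kubectl equivalent of what kapi.go did is:

	kubectl --context addons-947553 -n kube-system scale deployment coredns --replicas=1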
	I1120 20:21:55.256603    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 20:21:55.256640    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 20:21:55.513213    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.380124251s)
	I1120 20:21:55.513226    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.368859446s)
	I1120 20:21:55.513320    8315 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.307185785s)
	I1120 20:21:55.513363    8315 api_server.go:72] duration metric: took 5.222046626s to wait for apiserver process to appear ...
	I1120 20:21:55.513378    8315 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:21:55.513400    8315 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I1120 20:21:55.523525    8315 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I1120 20:21:55.528356    8315 api_server.go:141] control plane version: v1.34.1
	I1120 20:21:55.528379    8315 api_server.go:131] duration metric: took 14.994765ms to wait for apiserver health ...
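The healthz probe is a plain HTTPS GET against the apiserver; a 200 with body "ok" ends the wait. It is reproducible from the host with curl — -k because the apiserver serves a cluster-local certificate, and assuming anonymous access to /healthz, which the default system:public-info-viewer binding allows:

	curl -k https://192.168.39.80:8443/healthz   # prints: ok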
	I1120 20:21:55.528386    8315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:21:55.548383    8315 system_pods.go:59] 10 kube-system pods found
	I1120 20:21:55.548433    8315 system_pods.go:61] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.548445    8315 system_pods.go:61] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548456    8315 system_pods.go:61] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548466    8315 system_pods.go:61] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.548475    8315 system_pods.go:61] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.548481    8315 system_pods.go:61] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.548491    8315 system_pods.go:61] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.548496    8315 system_pods.go:61] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.548506    8315 system_pods.go:61] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.548517    8315 system_pods.go:61] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.548528    8315 system_pods.go:74] duration metric: took 20.135717ms to wait for pod list to return data ...
	I1120 20:21:55.548544    8315 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:21:55.562077    8315 default_sa.go:45] found service account: "default"
	I1120 20:21:55.562106    8315 default_sa.go:55] duration metric: took 13.552829ms for default service account to be created ...
	I1120 20:21:55.562116    8315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:21:55.573516    8315 system_pods.go:86] 10 kube-system pods found
	I1120 20:21:55.573548    8315 system_pods.go:89] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.573556    8315 system_pods.go:89] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573563    8315 system_pods.go:89] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573568    8315 system_pods.go:89] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.573572    8315 system_pods.go:89] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.573584    8315 system_pods.go:89] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.573588    8315 system_pods.go:89] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.573591    8315 system_pods.go:89] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.573595    8315 system_pods.go:89] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.573610    8315 system_pods.go:89] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.573619    8315 system_pods.go:126] duration metric: took 11.497162ms to wait for k8s-apps to be running ...
	I1120 20:21:55.573629    8315 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:21:55.573680    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:21:55.821435    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 20:21:55.821456    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 20:21:56.372153    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 20:21:56.372176    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 20:21:57.167628    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.167657    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 20:21:57.654485    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.724650    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 20:21:57.727763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728228    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:57.728257    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728455    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:57.738040    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.420656069s)
	I1120 20:21:57.738102    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.373508925s)
	I1120 20:21:58.308598    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 20:21:58.564754    8315 addons.go:239] Setting addon gcp-auth=true in "addons-947553"
	I1120 20:21:58.564806    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:58.566499    8315 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 20:21:58.568681    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569089    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:58.569115    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569249    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:58.833314    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.424339116s)
	I1120 20:21:58.833336    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.425455784s)
	I1120 20:21:58.833402    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.394606542s)
	I1120 20:22:00.317183    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.864775691s)
	I1120 20:22:00.317236    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.649563834s)
	I1120 20:22:00.317246    8315 addons.go:480] Verifying addon ingress=true in "addons-947553"
	I1120 20:22:00.317313    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.793066584s)
	I1120 20:22:00.317374    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.405778801s)
	I1120 20:22:00.317401    8315 addons.go:480] Verifying addon registry=true in "addons-947553"
	I1120 20:22:00.317473    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.841250467s)
	I1120 20:22:00.317500    8315 addons.go:480] Verifying addon metrics-server=true in "addons-947553"
	I1120 20:22:00.317549    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.638598976s)
	I1120 20:22:00.318753    8315 out.go:179] * Verifying ingress addon...
	I1120 20:22:00.319477    8315 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-947553 service yakd-dashboard -n yakd-dashboard
	
	I1120 20:22:00.319499    8315 out.go:179] * Verifying registry addon...
	I1120 20:22:00.321062    8315 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 20:22:00.321882    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 20:22:00.330255    8315 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:22:00.330274    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:00.330580    8315 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:22:00.330602    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
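kapi.go re-checks the matched pods roughly every 500ms (visible in the repeating timestamps below) until every pod behind the selector reports Ready. A stock-kubectl equivalent for the registry selector, with namespace, label, and timeout taken from the log:

	kubectl --context addons-947553 -n kube-system wait \
	  --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=10m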
	I1120 20:22:00.843037    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:00.862027    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.136755    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.810594192s)
	I1120 20:22:01.136799    8315 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.563097568s)
	W1120 20:22:01.136810    8315 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 20:22:01.136824    8315 system_svc.go:56] duration metric: took 5.563190734s WaitForService to wait for kubelet
	I1120 20:22:01.136838    8315 retry.go:31] will retry after 297.745206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
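This failure is the classic CRD establishment race: the batch apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in one shot, but the apiserver has not yet registered the new API group, so there is no mapping for kind VolumeSnapshotClass in snapshot.storage.k8s.io/v1 when the custom resource is submitted. The addon helper simply retries — the forced re-apply at 20:22:01 below succeeds once the CRDs are established. The race can also be avoided explicitly by splitting the apply and waiting for the CRD to be established first (a sketch; file names from the log):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml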
	I1120 20:22:01.136835    8315 kubeadm.go:587] duration metric: took 10.845518493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:22:01.136866    8315 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:22:01.169336    8315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1120 20:22:01.169377    8315 node_conditions.go:123] node cpu capacity is 2
	I1120 20:22:01.169391    8315 node_conditions.go:105] duration metric: took 32.519256ms to run NodePressure ...
	I1120 20:22:01.169403    8315 start.go:242] waiting for startup goroutines ...
	I1120 20:22:01.357701    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:01.358795    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.434928    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:22:01.868679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.868782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.346294    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.352833    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.862753    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.890512    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.996195    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.34165692s)
	I1120 20:22:02.996225    8315 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.429699726s)
	I1120 20:22:02.996254    8315 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:22:02.997930    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:22:02.997950    8315 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 20:22:02.999363    8315 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 20:22:02.999980    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 20:22:03.000816    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 20:22:03.000833    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 20:22:03.047631    8315 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:22:03.047661    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.095774    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 20:22:03.095800    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 20:22:03.172675    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.172696    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 20:22:03.258447    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.328725    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.328999    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:03.506980    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.835051    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.838342    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.009598    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.059484    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.624514335s)
	I1120 20:22:04.342509    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.346146    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:04.552392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.655990    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.397510493s)
	I1120 20:22:04.657251    8315 addons.go:480] Verifying addon gcp-auth=true in "addons-947553"
	I1120 20:22:04.658765    8315 out.go:179] * Verifying gcp-auth addon...
	I1120 20:22:04.660962    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 20:22:04.689345    8315 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 20:22:04.689379    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:04.830184    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.831805    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.008119    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.171353    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:05.336728    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:05.336869    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.517754    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.671439    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:05.828977    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:05.832656    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.008324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:06.167007    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:06.324520    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:06.327339    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.505702    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:06.665077    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:06.831323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.832004    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.005311    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:07.170575    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:07.326420    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.330401    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:07.504324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:07.665313    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:07.827482    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.830140    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.005717    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:08.168657    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:08.325483    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.326808    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:08.508047    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:08.664546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:08.828313    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.829419    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:09.004761    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:09.165417    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:09.325923    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:09.327133    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:09.503806    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:09.665158    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:09.827304    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:09.828458    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:10.005165    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:10.164419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:10.328020    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:10.328899    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:10.503540    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:10.665211    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:10.827565    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:10.828293    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.007088    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:11.172637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:11.329792    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.330515    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:11.506127    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:11.666152    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:11.832352    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.832833    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.009397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:12.164503    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:12.324601    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:12.330001    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.557333    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:12.690799    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:12.826246    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.827168    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:13.004570    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:13.166124    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:13.330939    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:13.334724    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:13.505747    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:13.664947    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:13.826640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:13.827501    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.005488    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:14.172285    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:14.325676    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.327874    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:14.505478    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:14.665377    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:14.828164    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.828324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:15.004108    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:15.165356    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:15.332218    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:15.345244    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:15.505401    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:15.665824    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:15.827117    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:15.827311    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.006364    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:16.177517    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:16.340592    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.341189    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:16.504797    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:16.664830    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:16.830245    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.830443    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.005532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:17.167264    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:17.330014    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.331394    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:17.559675    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:17.678477    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:17.826495    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.832794    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.005502    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:18.166351    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:18.327573    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.327734    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:18.503894    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:18.666269    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:18.830279    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.832316    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:19.005728    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:19.166452    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:19.327371    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:19.329317    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:19.506362    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:19.670606    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:19.831060    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:19.832764    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:20.004618    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:20.166635    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:20.327601    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:20.327638    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:20.504392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:20.665742    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:20.827471    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:20.829616    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:21.004605    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:21.169921    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:21.333272    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:21.336011    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:21.504542    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:21.665682    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:21.825419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:21.828055    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.004227    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:22.164229    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:22.326927    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.332370    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:22.505033    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:22.666978    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:22.834204    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:22.836963    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:23.003930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:23.168623    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:23.430297    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:23.433691    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:23.508735    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:23.667674    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:23.836886    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:23.837245    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:24.005900    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:24.169110    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:24.326634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:24.327904    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:24.673297    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:24.673506    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:24.830570    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:24.831631    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:25.009064    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:25.164922    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:25.325762    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:25.327935    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:25.504532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:25.667618    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:25.827414    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:25.828623    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:26.005073    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:26.167711    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:26.326679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:26.327247    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:26.505503    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:26.665655    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:26.825436    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:26.828500    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:27.005840    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:27.167830    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:27.328527    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:27.328746    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:27.506666    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:27.666716    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:27.832531    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:27.833632    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:28.006766    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:28.165323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:28.327708    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:28.328341    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:28.506036    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:28.666241    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:28.944433    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:28.944810    8315 kapi.go:107] duration metric: took 28.622926025s to wait for kubernetes.io/minikube-addons=registry ...
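	The kapi.go:96/kapi.go:107 lines above trace minikube's addon wait loop: one goroutine per label selector lists the matching pods roughly every 500 ms, logs the pending phases on each tick, and emits a duration metric once the selector is satisfied (here, 28.6 s for kubernetes.io/minikube-addons=registry, while the csi-hostpath-driver, gcp-auth, and ingress-nginx selectors keep polling below). The following is a minimal sketch of that polling pattern built on client-go; it is not minikube's actual kapi implementation, and the namespace, selector, timeout, interval, and helper names are illustrative assumptions. It also checks only pod phase, whereas a production wait would typically also check the Ready condition.

	// waitforpods_sketch.go - a minimal sketch (assumed, not minikube's kapi.go)
	// of the polling pattern shown in the log: list pods by label selector on a
	// fixed interval, log the pending state each tick, and report a duration
	// metric once every matching pod reports phase Running.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until all are Running
	// or ctx expires. Interval and phase-only readiness are assumptions.
	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		start := time.Now()
		tick := time.NewTicker(500 * time.Millisecond) // ~ the cadence visible in the log
		defer tick.Stop()
		for {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
				return nil
			}
			log.Printf("waiting for pod %q, current state: %v", selector, phases(pods, err))
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %s: %w", selector, ctx.Err())
			case <-tick.C:
			}
		}
	}

	// allRunning reports whether every pod in items has phase Running.
	func allRunning(items []corev1.Pod) bool {
		for _, p := range items {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	// phases collects the current pod phases for logging; nil on list error.
	func phases(pods *corev1.PodList, err error) []corev1.PodPhase {
		if err != nil || pods == nil {
			return nil
		}
		out := make([]corev1.PodPhase, 0, len(pods.Items))
		for _, p := range pods.Items {
			out = append(out, p.Status.Phase)
		}
		return out
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// 6m0s mirrors the wait budget used elsewhere in this report; assumed here.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPods(ctx, client, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			log.Fatal(err)
		}
	}

	Fixed-interval List polling, as sketched, is simpler and more robust against dropped connections than a watch, at the cost of the repetitive "current state: Pending" lines that dominate this log; a watch-based wait would log only on state transitions.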
	I1120 20:22:29.006863    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:29.167687    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:29.328145    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:29.504218    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:29.664460    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:29.827372    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:30.004445    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:30.164822    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:30.324811    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:30.504410    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:30.665044    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:30.825337    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:31.004318    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:31.164385    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:31.325406    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:31.505029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:31.665134    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:31.825650    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:32.004127    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:32.166139    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:32.324701    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:32.504614    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:32.664944    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:32.825143    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:33.004577    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:33.165685    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:33.325974    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:33.704460    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:33.708873    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:33.825075    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:34.004596    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:34.165867    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:34.325611    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:34.504800    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:34.665454    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:34.825871    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:35.004177    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:35.164697    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:35.326110    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:35.503481    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:35.664737    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:35.826308    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:36.004218    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:36.165000    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:36.324326    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:36.503689    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:36.666782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:36.825294    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:37.005202    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:37.164053    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:37.325572    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:37.505330    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:37.664284    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:37.825262    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:38.004289    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:38.164481    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:38.326051    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:38.503226    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:38.664232    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:38.824502    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:39.004487    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:39.164963    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:39.325878    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:39.505209    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:39.664636    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:39.825100    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:40.003777    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:40.165642    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:40.325683    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:40.504393    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:40.664821    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:40.824897    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:41.004355    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:41.164546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:41.326024    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:41.504280    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:41.664217    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:41.825780    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:42.005113    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:42.164701    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:42.325297    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:42.504448    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:42.665577    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:42.824743    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:43.004833    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:43.165891    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:43.326070    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:43.503696    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:43.664800    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:43.826756    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:44.005306    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:44.164704    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:44.325455    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:44.505302    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:44.664815    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:44.824692    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:45.003742    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:45.164950    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:45.325614    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:45.504532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:45.664827    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:45.826405    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:46.003951    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:46.165370    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:46.325730    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:46.505387    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:46.664689    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:46.825033    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:47.004484    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:47.165449    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:47.325798    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:47.504952    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:47.665632    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:47.825364    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:48.003790    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:48.165543    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:48.324818    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:48.504519    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:48.664630    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:48.825474    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:49.003721    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:49.164517    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:49.326505    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:49.504416    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:49.664711    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:49.825942    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:50.004200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:50.164578    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:50.325328    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:50.503484    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:50.665421    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:50.825294    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:51.004287    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:51.164268    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:51.325315    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:51.504380    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:51.665173    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:51.825228    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:52.004294    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:52.165271    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:52.325922    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:52.504540    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:52.664739    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:52.825458    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:53.003930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:53.165838    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:53.325362    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:53.503610    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:53.664870    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:53.827535    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:54.004328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:54.164077    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:54.324281    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:54.504388    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:54.665303    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:54.825120    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:55.004586    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:55.164561    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:55.325150    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:55.504219    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:55.664405    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:55.826068    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:56.004103    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:56.164821    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:56.325311    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:56.504506    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:56.664957    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:56.825313    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:57.004010    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:57.164442    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:57.325029    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:57.504374    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:57.664757    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:57.825231    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:58.005792    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:58.165223    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:58.325160    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:58.504029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:58.663903    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:58.825149    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:59.005092    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:59.164148    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:59.324606    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:59.506476    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:59.664372    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:59.825198    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:00.005082    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:00.164250    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:00.326383    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:00.503808    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:00.665909    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:00.825874    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:01.004396    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:01.164829    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:01.326451    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:01.504153    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:01.664393    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:01.825331    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:02.004168    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:02.165403    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:02.325338    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:02.504355    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:02.664961    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:02.826305    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:03.003577    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:03.165374    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:03.325222    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:03.503643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:03.665037    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:03.824710    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:04.004671    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:04.166844    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:04.325995    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:04.503907    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:04.665203    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:04.825349    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:05.003990    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:05.163740    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:05.325833    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:05.504450    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:05.665053    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:05.824804    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:06.005371    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:06.164513    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:06.324904    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:06.504771    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:06.665389    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:06.825137    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:07.003665    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:07.165006    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:07.325121    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:07.504075    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:07.665109    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:07.824752    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:08.005627    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:08.165094    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:08.325074    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:08.504510    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:08.665363    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:08.825519    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:09.004201    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:09.165446    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:09.328697    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:09.504259    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:09.664453    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:09.825519    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:10.005404    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:10.164687    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:10.325987    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:10.504122    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:10.664875    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:10.826159    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:11.003419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:11.164744    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:11.325475    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:11.504220    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:11.664757    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:11.825474    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:12.004170    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:12.164955    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:12.325525    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:12.503631    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:12.665991    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:12.825430    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:13.003813    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:13.165098    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:13.325081    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:13.505315    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:13.665028    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:13.824542    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:14.005048    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:14.164487    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:14.325020    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:14.505722    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:14.665177    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 repeats the same three polls round-robin (each label checked roughly every 500ms) from 20:23:14 through 20:24:26; "kubernetes.io/minikube-addons=gcp-auth", "app.kubernetes.io/name=ingress-nginx", and "kubernetes.io/minikube-addons=csi-hostpath-driver" all remain "Pending: [<nil>]" throughout ...]
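The kapi.go:96 entries above trace a plain label-selector poll: list the pods matching a label, check their phase, sleep, repeat until everything is Running or the deadline passes. The following is a minimal client-go sketch of that pattern, an illustration of the loop the log implies rather than minikube's actual kapi code; the helper name, 500ms interval, 6-minute timeout, and kubeconfig handling are all assumptions.

    // waitpods.go: hypothetical sketch of the polling loop behind the
    // kapi.go:96 lines above. Not minikube's implementation.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods blocks until every pod matching selector is Running,
    // logging a line per poll much like the entries above.
    func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, interval time.Duration) error {
        for {
            pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            // An empty or not-yet-Running list keeps us polling; the log
            // above renders this state as Pending.
            allRunning := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    allRunning = false
                }
            }
            if allRunning {
                return nil
            }
            fmt.Printf("waiting for pod %q, still not Running\n", selector)
            select {
            case <-ctx.Done():
                return ctx.Err() // deadline exceeded or cancelled
            case <-time.After(interval):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        // Empty namespace lists across all namespaces; the addon pods'
        // actual namespace is not shown in the log.
        if err := waitForPods(ctx, client, "", "kubernetes.io/minikube-addons=gcp-auth", 500*time.Millisecond); err != nil {
            panic(err)
        }
    }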
	I1120 20:24:26.824303    8315 kapi.go:107] duration metric: took 2m26.503242857s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 20:24:27.004029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.164962    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:27.504834    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.668267    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.007248    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.166983    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.507055    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.666163    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.005997    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.328979    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.505976    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.669956    8315 kapi.go:107] duration metric: took 2m25.008991629s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 20:24:29.672108    8315 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-947553 cluster.
	I1120 20:24:29.673437    8315 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 20:24:29.674752    8315 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
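	The three gcp-auth messages above are the addon's how-to: every new pod in the cluster gets the GCP credentials mounted unless it opts out via the `gcp-auth-skip-secret` label. A minimal sketch of an opted-out pod, assuming the webhook only checks for the label key as the message states (the pod name, image, and label value here are illustrative):

kubectl --context addons-947553 apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                # illustrative name
  labels:
    gcp-auth-skip-secret: "true"    # key quoted in the message above; value illustrative
spec:
  containers:
  - name: main
    image: busybox:1.36             # illustrative image
    command: ["sleep", "3600"]
EOF

	For pods created before the addon finished, the refresh hinted at above would be rerun as: minikube -p addons-947553 addons enable gcp-auth --refresh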
	I1120 20:24:30.011875    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:30.506718    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.005946    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.508062    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.004768    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.513385    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.006643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.504200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:34.004984    8315 kapi.go:107] duration metric: took 2m31.004999967s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1120 20:24:34.006745    8315 out.go:179] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1120 20:24:34.007905    8315 addons.go:515] duration metric: took 2m43.716565511s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1120 20:24:34.007942    8315 start.go:247] waiting for cluster config update ...
	I1120 20:24:34.007968    8315 start.go:256] writing updated cluster config ...
	I1120 20:24:34.008267    8315 ssh_runner.go:195] Run: rm -f paused
	I1120 20:24:34.016789    8315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:24:34.020696    8315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.026522    8315 pod_ready.go:94] pod "coredns-66bc5c9577-tpfkd" is "Ready"
	I1120 20:24:34.026545    8315 pod_ready.go:86] duration metric: took 5.821939ms for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.029616    8315 pod_ready.go:83] waiting for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.035420    8315 pod_ready.go:94] pod "etcd-addons-947553" is "Ready"
	I1120 20:24:34.035447    8315 pod_ready.go:86] duration metric: took 5.807107ms for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.038012    8315 pod_ready.go:83] waiting for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.042359    8315 pod_ready.go:94] pod "kube-apiserver-addons-947553" is "Ready"
	I1120 20:24:34.042389    8315 pod_ready.go:86] duration metric: took 4.353428ms for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.045156    8315 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.421067    8315 pod_ready.go:94] pod "kube-controller-manager-addons-947553" is "Ready"
	I1120 20:24:34.421095    8315 pod_ready.go:86] duration metric: took 375.9154ms for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.622667    8315 pod_ready.go:83] waiting for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.021658    8315 pod_ready.go:94] pod "kube-proxy-92nmr" is "Ready"
	I1120 20:24:35.021685    8315 pod_ready.go:86] duration metric: took 398.990446ms for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.222270    8315 pod_ready.go:83] waiting for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621176    8315 pod_ready.go:94] pod "kube-scheduler-addons-947553" is "Ready"
	I1120 20:24:35.621208    8315 pod_ready.go:86] duration metric: took 398.900241ms for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621225    8315 pod_ready.go:40] duration metric: took 1.604402122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:24:35.668514    8315 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:24:35.670410    8315 out.go:179] * Done! kubectl is now configured to use "addons-947553" cluster and "default" namespace by default
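	With the cluster reported ready, the per-component checks logged above can be rerun by hand using the same label selectors the log printed; a sketch against the context named in the "Done!" line:

kubectl --context addons-947553 get pods -n kube-system -l k8s-app=kube-dns
kubectl --context addons-947553 get pods -n kube-system -l component=etcd
kubectl --context addons-947553 wait pod -n kube-system -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m0s

	The wait timeout mirrors the 4m0s extra-waiting budget used by the pod_ready step above.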
	
	
	==> CRI-O <==
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.079767240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670595079739434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13514b66-3dde-4090-a043-b1a7b1c042d0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.080796833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b327320-b1bb-44e9-9a63-4097cbf6796e name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.080939947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b327320-b1bb-44e9-9a63-4097cbf6796e name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.081445237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c99
50d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandboxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandbox
Id:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd,PodSandboxId:402b0cbd3903b8cdea36adcd4f0c9f425b6855d1790d160c8372f64a376bb50c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763670149740471174,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-znfrl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f9329f2b-eaa2-4b45-b91d-3433062e9ac0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plu
gin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:
0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85
ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d659
7086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b327320-b1bb-44e9-9a63-4097cbf6796e name=/runtime.v1.RuntimeService/ListContainers
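	The CRI-O entries in this section are the kubelet's routine polling of the CRI endpoints (Version, ImageFsInfo, and an unfiltered ListContainers), each visible as a Request/Response pair with a shared id. Assuming crictl is present in the minikube VM, as it normally is with the crio runtime, the same RPCs can be issued manually; a sketch:

minikube -p addons-947553 ssh
sudo crictl version      # RuntimeService/Version
sudo crictl imagefsinfo  # ImageService/ImageFsInfo
sudo crictl ps -a        # RuntimeService/ListContainers with no filter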
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.120752349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef65e695-d3a0-413e-aef7-50f28f19a55a name=/runtime.v1.RuntimeService/Version
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.120947211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef65e695-d3a0-413e-aef7-50f28f19a55a name=/runtime.v1.RuntimeService/Version
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.122605709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2ec5ece-bf9d-4944-aa32-b8ed1a36955f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.123736133Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670595123711821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2ec5ece-bf9d-4944-aa32-b8ed1a36955f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.124851939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98ef2393-349e-4596-885e-d82ba423388a name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.124915341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98ef2393-349e-4596-885e-d82ba423388a name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.125467580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c99
50d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandboxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandbox
Id:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd,PodSandboxId:402b0cbd3903b8cdea36adcd4f0c9f425b6855d1790d160c8372f64a376bb50c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763670149740471174,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-znfrl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f9329f2b-eaa2-4b45-b91d-3433062e9ac0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plu
gin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:
0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85
ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d659
7086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98ef2393-349e-4596-885e-d82ba423388a name=/runtime.v1.RuntimeService/ListContainers
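	The Version, ImageFsInfo, and ListContainers calls above are routine CRI polling traffic from the kubelet against CRI-O. As a sketch, the same three RPCs can be reproduced by hand from inside the node with crictl (assuming CRI-O's default socket path, unix:///var/run/crio/crio.sock):
	
	  $ minikube -p addons-947553 ssh
	  # same RPC as /runtime.v1.RuntimeService/Version
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  # same RPC as /runtime.v1.ImageService/ImageFsInfo
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	  # ListContainers with an empty filter, matching the "No filters were applied" lines above
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json
	
	crictl ps -a roughly corresponds to a ListContainersRequest with an empty ContainerFilter; adding flags such as --name or --state populates the filter fields shown in the request logs.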
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.199789765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e38321e7-40a2-4e11-9693-1499ad534e3d name=/runtime.v1.RuntimeService/Version
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.199909258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e38321e7-40a2-4e11-9693-1499ad534e3d name=/runtime.v1.RuntimeService/Version
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.201991792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61277984-9c8c-46a5-af69-d8c2173de578 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.204128954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670595204033205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61277984-9c8c-46a5-af69-d8c2173de578 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.205331649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f6f759e-597f-4405-8ca9-7ad9a32f7023 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.205439837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f6f759e-597f-4405-8ca9-7ad9a32f7023 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:29:55 addons-947553 crio[815]: time="2025-11-20 20:29:55.206047052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c99
50d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandboxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandbox
Id:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd,PodSandboxId:402b0cbd3903b8cdea36adcd4f0c9f425b6855d1790d160c8372f64a376bb50c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763670149740471174,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-znfrl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f9329f2b-eaa2-4b45-b91d-3433062e9ac0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plu
gin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:
0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85
ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d659
7086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f6f759e-597f-4405-8ca9-7ad9a32f7023 name=/runtime.v1.RuntimeService/ListContainers
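
The blob above is the raw protobuf text of one /runtime.v1.RuntimeService/ListContainers response, which minikube's log collector prints verbatim; the "container status" table below renders the same inventory readably. As a minimal sketch of how that RPC is issued (assuming the default CRI-O socket path and the k8s.io/cri-api and google.golang.org/grpc modules; this is effectively what `crictl ps -a` does under the hood):

	// list_containers.go: query CRI-O's RuntimeService directly.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// The socket path is an assumption; other runtimes use different paths.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Print a short id, state, and name, roughly like crictl's table.
			fmt.Printf("%.13s  %-18s %s\n", c.Id, c.State, c.Metadata.Name)
		}
	}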
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	83c7cffc192d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   30b4f748049f4       busybox                                    default
	1182df9d08d19       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          5 minutes ago       Running             csi-snapshotter                          0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	3c592e1a3ecfd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          5 minutes ago       Running             csi-provisioner                          0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	a26090ac24452       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            5 minutes ago       Running             liveness-probe                           0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	d3d8b65697554       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             5 minutes ago       Running             controller                               0                   0a1212c05ea88       ingress-nginx-controller-6c8bf45fb-6hpj8   ingress-nginx
	a781be0336bcb       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           5 minutes ago       Running             hostpath                                 0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	c7f17ef5a5382       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	fb8563d67522d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   367d0442cb7aa       csi-hostpath-resizer-0                     kube-system
	68eba1ff29e5c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   77498a7d4320e       csi-hostpath-attacher-0                    kube-system
	4189eecca6982       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   64e4a94a11b34       snapshot-controller-7d9fbc56b8-7n9bg       kube-system
	b13c5a7e788c0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	ebdc020b24013       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   6 minutes ago       Exited              patch                                    0                   aab95fc7e29c5       ingress-nginx-admission-patch-xqmtg        ingress-nginx
	30d944607d06d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   f811a556e9729       snapshot-controller-7d9fbc56b8-944pl       kube-system
	cf24d40d09d97       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   6 minutes ago       Exited              create                                   0                   b81a00087e290       ingress-nginx-admission-create-whk72       ingress-nginx
	7581f788bba24       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   402b0cbd3903b       local-path-provisioner-648f6765c9-znfrl    local-path-storage
	3ed48acc4e6b6       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               7 minutes ago       Running             minikube-ingress-dns                     0                   e08ae02d97821       kube-ingress-dns-minikube                  kube-system
	1f0a03ae88dd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   7a8aea6b56873       storage-provisioner                        kube-system
	dc04223232fbc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     7 minutes ago       Running             amd-gpu-device-plugin                    0                   1c75fb61317d9       amd-gpu-device-plugin-sl95v                kube-system
	44ea167ad7358       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   1b8aec92deac0       coredns-66bc5c9577-tpfkd                   kube-system
	107772b7cd302       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   44459bb4c1592       kube-proxy-92nmr                           kube-system
	1d2feff972c82       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago       Running             kube-scheduler                           0                   7854300bd65f2       kube-scheduler-addons-947553               kube-system
	3ce144c0d06ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago       Running             kube-apiserver                           0                   c0df804390cc3       kube-apiserver-addons-947553               kube-system
	3f04fbc5a9a9d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago       Running             kube-controller-manager                  0                   c73098b299e79       kube-controller-manager-addons-947553      kube-system
	1b4f51aca4917       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   959ac70855500       etcd-addons-947553                         kube-system
	
	
	==> coredns [44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86] <==
	[INFO] 10.244.0.8:38281 - 13381 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000419309s
	[INFO] 10.244.0.8:38281 - 4239 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000335145s
	[INFO] 10.244.0.8:38281 - 63093 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000099875s
	[INFO] 10.244.0.8:38281 - 4801 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008321s
	[INFO] 10.244.0.8:38281 - 39674 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000264028s
	[INFO] 10.244.0.8:38281 - 62546 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124048s
	[INFO] 10.244.0.8:38281 - 16805 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000647057s
	[INFO] 10.244.0.8:51997 - 13985 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160466s
	[INFO] 10.244.0.8:51997 - 14298 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000220652s
	[INFO] 10.244.0.8:45076 - 61133 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125223s
	[INFO] 10.244.0.8:45076 - 60865 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000152664s
	[INFO] 10.244.0.8:36522 - 44178 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060404s
	[INFO] 10.244.0.8:36522 - 43995 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078705s
	[INFO] 10.244.0.8:59475 - 4219 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116054s
	[INFO] 10.244.0.8:59475 - 4422 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010261s
	[INFO] 10.244.0.23:44890 - 42394 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000390546s
	[INFO] 10.244.0.23:40413 - 38581 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001287022s
	[INFO] 10.244.0.23:48952 - 288 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001963576s
	[INFO] 10.244.0.23:45971 - 54062 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002169261s
	[INFO] 10.244.0.23:46787 - 19498 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139649s
	[INFO] 10.244.0.23:50609 - 21977 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067547s
	[INFO] 10.244.0.23:44756 - 29378 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.005330443s
	[INFO] 10.244.0.23:59657 - 39385 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005346106s
	[INFO] 10.244.0.27:42107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000463345s
	[INFO] 10.244.0.27:53096 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000254044s
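
The NXDOMAIN/NOERROR pairs above are ordinary ndots-driven search-path expansion: the pod's resolver first tries registry.kube-system.svc.cluster.local with each search suffix appended (all NXDOMAIN) before the fully qualified name answers NOERROR with a record. CoreDNS is resolving the registry Service correctly, which points the registry-test wget timeout away from DNS. A minimal in-cluster sketch of the same lookup (assuming it runs in a pod wired to the cluster resolver):

	// resolve_registry.go: repeat the lookup the failing wget probe made.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// The trailing dot marks the name fully qualified, skipping the
		// search-path expansion responsible for the NXDOMAINs above.
		ips, err := net.DefaultResolver.LookupIPAddr(ctx,
			"registry.kube-system.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", ips)
	}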
	
	
	==> describe nodes <==
	Name:               addons-947553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-947553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-947553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-947553
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-947553"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:21:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-947553
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:29:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    addons-947553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ab490c5e4f046af88ecdee8117466b4
	  System UUID:                2ab490c5-e4f0-46af-88ec-dee8117466b4
	  Boot ID:                    1ea0245c-4d70-493b-9a36-f639a36dba5f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-6hpj8                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         7m56s
	  kube-system                 amd-gpu-device-plugin-sl95v                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 coredns-66bc5c9577-tpfkd                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m5s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 csi-hostpathplugin-xtf7r                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 etcd-addons-947553                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m10s
	  kube-system                 kube-apiserver-addons-947553                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-controller-manager-addons-947553                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-proxy-92nmr                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-scheduler-addons-947553                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 snapshot-controller-7d9fbc56b8-7n9bg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 snapshot-controller-7d9fbc56b8-944pl                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  local-path-storage          helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  local-path-storage          local-path-provisioner-648f6765c9-znfrl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m3s   kube-proxy       
	  Normal  Starting                 8m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m10s  kubelet          Node addons-947553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m10s  kubelet          Node addons-947553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m10s  kubelet          Node addons-947553 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m9s   kubelet          Node addons-947553 status is now: NodeReady
	  Normal  RegisteredNode           8m6s   node-controller  Node addons-947553 event: Registered Node addons-947553 in Controller
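
The node report shows a single two-CPU control-plane node with all pressure conditions False and about 850m CPU (42%) already requested, so node-level resource pressure does not appear implicated in the failures. A short client-go sketch that fetches the same conditions and allocatable figures (the kubeconfig path is an assumption):

	// node_conditions.go: fetch what `kubectl describe node` summarizes.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"addons-947553", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu(),
			"memory:", node.Status.Allocatable.Memory())
	}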
	
	
	==> dmesg <==
	[  +0.754334] kauditd_printk_skb: 318 callbacks suppressed
	[Nov20 20:22] kauditd_printk_skb: 302 callbacks suppressed
	[  +3.551453] kauditd_printk_skb: 395 callbacks suppressed
	[  +6.168214] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.651247] kauditd_printk_skb: 17 callbacks suppressed
	[Nov20 20:23] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.679825] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.000053] kauditd_printk_skb: 157 callbacks suppressed
	[  +5.059481] kauditd_printk_skb: 109 callbacks suppressed
	[Nov20 20:24] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.445964] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.477031] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.089818] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:25] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.536974] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.509608] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000043] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:26] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.002720] kauditd_printk_skb: 50 callbacks suppressed
	[ +11.737417] kauditd_printk_skb: 103 callbacks suppressed
	[Nov20 20:27] kauditd_printk_skb: 15 callbacks suppressed
	[Nov20 20:28] kauditd_printk_skb: 21 callbacks suppressed
	[Nov20 20:29] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45] <==
	{"level":"info","ts":"2025-11-20T20:23:44.570260Z","caller":"traceutil/trace.go:172","msg":"trace[663488031] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1136; }","duration":"154.066668ms","start":"2025-11-20T20:23:44.416165Z","end":"2025-11-20T20:23:44.570231Z","steps":["trace[663488031] 'read index received'  (duration: 154.021094ms)","trace[663488031] 'applied index is now lower than readState.Index'  (duration: 44.411µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:23:44.570877Z","caller":"traceutil/trace.go:172","msg":"trace[715433296] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"233.967936ms","start":"2025-11-20T20:23:44.336900Z","end":"2025-11-20T20:23:44.570868Z","steps":["trace[715433296] 'process raft request'  (duration: 233.871288ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571611Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.483381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:23:44.571673Z","caller":"traceutil/trace.go:172","msg":"trace[884414279] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1098; }","duration":"111.548598ms","start":"2025-11-20T20:23:44.460117Z","end":"2025-11-20T20:23:44.571666Z","steps":["trace[884414279] 'agreement among raft nodes before linearized reading'  (duration: 111.465445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571061Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.869609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.80\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-11-20T20:23:44.571810Z","caller":"traceutil/trace.go:172","msg":"trace[1446846650] range","detail":"{range_begin:/registry/masterleases/192.168.39.80; range_end:; response_count:1; response_revision:1098; }","duration":"155.64428ms","start":"2025-11-20T20:23:44.416161Z","end":"2025-11-20T20:23:44.571805Z","steps":["trace[1446846650] 'agreement among raft nodes before linearized reading'  (duration: 154.810085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:23:46.528477Z","caller":"traceutil/trace.go:172","msg":"trace[982384876] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"154.809492ms","start":"2025-11-20T20:23:46.373650Z","end":"2025-11-20T20:23:46.528459Z","steps":["trace[982384876] 'process raft request'  (duration: 154.328485ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.123570Z","caller":"traceutil/trace.go:172","msg":"trace[1335763238] linearizableReadLoop","detail":"{readStateIndex:1253; appliedIndex:1253; }","duration":"134.10576ms","start":"2025-11-20T20:24:24.989438Z","end":"2025-11-20T20:24:25.123544Z","steps":["trace[1335763238] 'read index received'  (duration: 134.100119ms)","trace[1335763238] 'applied index is now lower than readState.Index'  (duration: 5.092µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:25.123838Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.381481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-11-20T20:24:25.123864Z","caller":"traceutil/trace.go:172","msg":"trace[1178674559] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1204; }","duration":"134.473479ms","start":"2025-11-20T20:24:24.989384Z","end":"2025-11-20T20:24:25.123857Z","steps":["trace[1178674559] 'agreement among raft nodes before linearized reading'  (duration: 134.302699ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:24:25.124126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.465459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:25.124145Z","caller":"traceutil/trace.go:172","msg":"trace[392254424] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1205; }","duration":"131.486967ms","start":"2025-11-20T20:24:24.992652Z","end":"2025-11-20T20:24:25.124139Z","steps":["trace[392254424] 'agreement among raft nodes before linearized reading'  (duration: 131.453666ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.124311Z","caller":"traceutil/trace.go:172","msg":"trace[1682962710] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"237.606056ms","start":"2025-11-20T20:24:24.886699Z","end":"2025-11-20T20:24:25.124305Z","steps":["trace[1682962710] 'process raft request'  (duration: 237.320378ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.314678Z","caller":"traceutil/trace.go:172","msg":"trace[1797119853] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1279; }","duration":"155.702658ms","start":"2025-11-20T20:24:29.158960Z","end":"2025-11-20T20:24:29.314662Z","steps":["trace[1797119853] 'read index received'  (duration: 155.696769ms)","trace[1797119853] 'applied index is now lower than readState.Index'  (duration: 4.683µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:29.314797Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.822209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:29.314815Z","caller":"traceutil/trace.go:172","msg":"trace[163313341] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1230; }","duration":"155.853309ms","start":"2025-11-20T20:24:29.158956Z","end":"2025-11-20T20:24:29.314809Z","steps":["trace[163313341] 'agreement among raft nodes before linearized reading'  (duration: 155.793828ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.315341Z","caller":"traceutil/trace.go:172","msg":"trace[932727743] transaction","detail":"{read_only:false; response_revision:1231; number_of_response:1; }","duration":"158.601334ms","start":"2025-11-20T20:24:29.156731Z","end":"2025-11-20T20:24:29.315333Z","steps":["trace[932727743] 'process raft request'  (duration: 158.264408ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.860975Z","caller":"traceutil/trace.go:172","msg":"trace[570114600] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"232.699788ms","start":"2025-11-20T20:24:38.628262Z","end":"2025-11-20T20:24:38.860962Z","steps":["trace[570114600] 'process raft request'  (duration: 232.584342ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.862428Z","caller":"traceutil/trace.go:172","msg":"trace[1632150606] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"194.825132ms","start":"2025-11-20T20:24:38.667594Z","end":"2025-11-20T20:24:38.862419Z","steps":["trace[1632150606] 'process raft request'  (duration: 194.764757ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:25:59.796917Z","caller":"traceutil/trace.go:172","msg":"trace[1018787678] transaction","detail":"{read_only:false; response_revision:1587; number_of_response:1; }","duration":"178.519957ms","start":"2025-11-20T20:25:59.618371Z","end":"2025-11-20T20:25:59.796891Z","steps":["trace[1018787678] 'process raft request'  (duration: 178.419059ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:26:07.306954Z","caller":"traceutil/trace.go:172","msg":"trace[1832150044] linearizableReadLoop","detail":"{readStateIndex:1696; appliedIndex:1696; }","duration":"207.161975ms","start":"2025-11-20T20:26:07.099774Z","end":"2025-11-20T20:26:07.306936Z","steps":["trace[1832150044] 'read index received'  (duration: 207.151183ms)","trace[1832150044] 'applied index is now lower than readState.Index'  (duration: 6.599µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:26:07.307088Z","caller":"traceutil/trace.go:172","msg":"trace[519307734] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"362.807072ms","start":"2025-11-20T20:26:06.944270Z","end":"2025-11-20T20:26:07.307077Z","steps":["trace[519307734] 'process raft request'  (duration: 362.695059ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:26:07.307192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.369314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3725"}
	{"level":"info","ts":"2025-11-20T20:26:07.307216Z","caller":"traceutil/trace.go:172","msg":"trace[875135275] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:1621; }","duration":"207.439279ms","start":"2025-11-20T20:26:07.099770Z","end":"2025-11-20T20:26:07.307209Z","steps":["trace[875135275] 'agreement among raft nodes before linearized reading'  (duration: 207.290795ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:26:07.307851Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:26:06.944254Z","time spent":"362.881173ms","remote":"127.0.0.1:35880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3014,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:1620 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:2970 >> failure:<request_range:<key:\"/registry/pods/default/registry-test\" > >"}
	
	
	==> kernel <==
	 20:29:55 up 8 min,  0 users,  load average: 0.41, 1.47, 0.97
	Linux addons-947553 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2] <==
	W1120 20:23:00.364766       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:00.364849       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 20:23:00.364867       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1120 20:23:00.365762       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:00.365790       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1120 20:23:00.366969       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1120 20:23:34.247008       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	W1120 20:23:34.253741       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:34.253819       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1120 20:23:34.256485       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	E1120 20:23:34.259388       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	E1120 20:23:34.271232       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	I1120 20:23:34.434058       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1120 20:24:45.470175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50698: use of closed network connection
	E1120 20:24:45.698946       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50724: use of closed network connection
	I1120 20:24:55.153735       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.73.86"}
	I1120 20:25:35.271669       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1120 20:26:07.917022       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1120 20:26:08.188570       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.64.46"}
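
The 503s and "connection refused" errors against https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1 come from the apiserver's availability controller probing the metrics-server APIService before its backing endpoints were ready; the 20:25:35 "removed from the queue" line shows the condition cleared. The same aggregated path can be probed through the apiserver, as a sketch with the same kubeconfig assumption as the node sketch above:

	// probe_metrics_api.go: request the aggregated metrics API path.
	package main

	import (
		"context"
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		body, err := cs.Discovery().RESTClient().Get().
			AbsPath("/apis/metrics.k8s.io/v1beta1").
			DoRaw(context.Background())
		if err != nil {
			// While metrics-server is down this mirrors the 503s above.
			fmt.Println("aggregated API unavailable:", err)
			return
		}
		fmt.Println(string(body))
	}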
	
	
	==> kube-controller-manager [3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be] <==
	I1120 20:21:49.551353       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:21:49.558938       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:21:49.560164       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:21:49.564482       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:21:49.572448       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:21:49.574897       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:21:49.579336       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:21:54.678834       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1120 20:21:58.672593       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1120 20:22:19.544397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:22:19.546674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1120 20:22:19.546720       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1120 20:22:19.600217       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1120 20:22:19.618675       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 20:22:19.646978       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:22:19.720013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1120 20:22:49.656241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:22:49.730478       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 20:23:19.661239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:23:19.740631       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 20:24:55.213061       1 replica_set.go:587] "Unhandled Error" err="sync \"headlamp/headlamp-6945c6f4d\" failed with pods \"headlamp-6945c6f4d-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I1120 20:24:58.991066       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1120 20:26:18.292121       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1120 20:26:30.134630       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I1120 20:28:38.989345       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf] <==
	I1120 20:21:51.944081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:21:52.047283       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:21:52.059178       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1120 20:21:52.063486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:21:52.317013       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:21:52.317608       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:21:52.319592       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:21:52.353676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:21:52.353988       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:21:52.354004       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:21:52.365989       1 config.go:200] "Starting service config controller"
	I1120 20:21:52.366010       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:21:52.373413       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:21:52.373476       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:21:52.373601       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:21:52.373606       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:21:52.404955       1 config.go:309] "Starting node config controller"
	I1120 20:21:52.405179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:21:52.405460       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:21:52.474183       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:21:52.474283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:21:52.570175       1 shared_informer.go:356] "Caches are synced" controller="service config"
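
kube-proxy starts cleanly here: the ip6tables message only means the VM has no IPv6 NAT table, and the nodePortAddresses warning is advisory. The proxier also reports setting route_localnet=1 so NodePorts accept localhost traffic; a trivial sketch to confirm that sysctl on the node (the path is the standard procfs location):

	// check_route_localnet.go: read the sysctl kube-proxy reports setting.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
	}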
	
	
	==> kube-scheduler [1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b] <==
	E1120 20:21:42.658146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:42.658289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:42.658479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:42.659065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:21:42.659191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:42.659355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:42.659676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:21:42.660629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:43.501696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:21:43.568808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:21:43.596853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:43.607731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:21:43.612970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:21:43.637766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:21:43.650165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:43.687838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:21:43.786838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:43.825959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:21:43.878175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:43.895745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:43.953162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:21:43.991210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:44.021889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:21:44.053100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 20:21:46.731200       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
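The reflector errors above are a normal startup race: kube-scheduler begins listing resources before the apiserver has finished installing its bootstrap RBAC bindings, and the errors stop once "Caches are synced" is logged. The binding the scheduler eventually relies on can be confirmed against the live cluster (an illustrative spot-check, not part of the recorded run):

  kubectl --context addons-947553 get clusterrolebinding system:kube-scheduler -o wide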
	
	
	==> kubelet <==
	Nov 20 20:29:04 addons-947553 kubelet[1518]: E1120 20:29:04.412564    1518 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 20 20:29:04 addons-947553 kubelet[1518]: E1120 20:29:04.412809    1518 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(3fabe4f4-d0a9-40fe-a635-e27af546a8ce): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:29:04 addons-947553 kubelet[1518]: E1120 20:29:04.412854    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="3fabe4f4-d0a9-40fe-a635-e27af546a8ce"
	Nov 20 20:29:05 addons-947553 kubelet[1518]: E1120 20:29:05.699981    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670545699630245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:05 addons-947553 kubelet[1518]: E1120 20:29:05.700025    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670545699630245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:06 addons-947553 kubelet[1518]: I1120 20:29:06.830734    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/59e2a5f4-8738-4c34-9c90-bda7cc2264b9-script\") pod \"helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a\" (UID: \"59e2a5f4-8738-4c34-9c90-bda7cc2264b9\") " pod="local-path-storage/helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a"
	Nov 20 20:29:06 addons-947553 kubelet[1518]: I1120 20:29:06.830829    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/59e2a5f4-8738-4c34-9c90-bda7cc2264b9-data\") pod \"helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a\" (UID: \"59e2a5f4-8738-4c34-9c90-bda7cc2264b9\") " pod="local-path-storage/helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a"
	Nov 20 20:29:06 addons-947553 kubelet[1518]: I1120 20:29:06.830858    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhnzh\" (UniqueName: \"kubernetes.io/projected/59e2a5f4-8738-4c34-9c90-bda7cc2264b9-kube-api-access-qhnzh\") pod \"helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a\" (UID: \"59e2a5f4-8738-4c34-9c90-bda7cc2264b9\") " pod="local-path-storage/helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a"
	Nov 20 20:29:15 addons-947553 kubelet[1518]: E1120 20:29:15.702768    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670555702315652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:15 addons-947553 kubelet[1518]: E1120 20:29:15.702790    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670555702315652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:19 addons-947553 kubelet[1518]: E1120 20:29:19.332038    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="3fabe4f4-d0a9-40fe-a635-e27af546a8ce"
	Nov 20 20:29:23 addons-947553 kubelet[1518]: I1120 20:29:23.330801    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:29:25 addons-947553 kubelet[1518]: E1120 20:29:25.706305    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670565705721324  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:25 addons-947553 kubelet[1518]: E1120 20:29:25.706357    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670565705721324  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:34 addons-947553 kubelet[1518]: E1120 20:29:34.510689    1518 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 20 20:29:34 addons-947553 kubelet[1518]: E1120 20:29:34.510758    1518 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 20 20:29:34 addons-947553 kubelet[1518]: E1120 20:29:34.511158    1518 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(261f896c-810b-4000-a18d-13ad1a4b0967): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:29:34 addons-947553 kubelet[1518]: E1120 20:29:34.511213    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="261f896c-810b-4000-a18d-13ad1a4b0967"
	Nov 20 20:29:35 addons-947553 kubelet[1518]: E1120 20:29:35.709753    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670575709143369  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:35 addons-947553 kubelet[1518]: E1120 20:29:35.709780    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670575709143369  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:45 addons-947553 kubelet[1518]: E1120 20:29:45.713149    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670585712712178  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:45 addons-947553 kubelet[1518]: E1120 20:29:45.713177    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670585712712178  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:49 addons-947553 kubelet[1518]: E1120 20:29:49.336274    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="261f896c-810b-4000-a18d-13ad1a4b0967"
	Nov 20 20:29:55 addons-947553 kubelet[1518]: E1120 20:29:55.715743    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670595715250113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:29:55 addons-947553 kubelet[1518]: E1120 20:29:55.715770    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670595715250113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
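Interleaved with the pull failures, the kubelet repeats "failed to get HasDedicatedImageFs ... missing image stats"; that is the eviction manager failing to translate CRI-O's imagefs stats and is log noise here, not a cause of any test failure. The underlying CRI data can be inspected from inside the node (illustrative; imagefsinfo is a standard crictl subcommand):

  out/minikube-linux-amd64 -p addons-947553 ssh -- sudo crictl imagefsinfo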
	
	
	==> storage-provisioner [1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806] <==
	W1120 20:29:31.229021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:33.232653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:33.238023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:35.242149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:35.250736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:37.255955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:37.261739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:39.264724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:39.272409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:41.278776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:41.283765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:43.287706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:43.294171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:45.297274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:45.302248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:47.307139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:47.314653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:49.318805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:49.325538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:51.329443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:51.337349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:53.341260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:53.350620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:55.355711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:29:55.363736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
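Two things stand out in the log above: every application-pod failure is Docker Hub's unauthenticated-pull rate limit, and the storage-provisioner still runs leader election against the v1 Endpoints API, which is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. That the cluster serves the replacement resource can be verified with a hypothetical spot-check:

  kubectl --context addons-947553 get endpointslices.discovery.k8s.io -A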
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-947553 -n addons-947553
helpers_test.go:269: (dbg) Run:  kubectl --context addons-947553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a: exit status 1 (84.121161ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:26:08 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8bvn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s8bvn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m48s                 default-scheduler  Successfully assigned default/nginx to addons-947553
	  Normal   Pulling    2m8s (x2 over 3m48s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     22s (x2 over 2m22s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     22s (x2 over 2m22s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x2 over 2m22s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7s (x2 over 2m22s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:25:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw89l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mw89l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m27s                default-scheduler  Successfully assigned default/task-pv-pod to addons-947553
	  Warning  Failed     52s (x2 over 3m23s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     52s (x2 over 3m23s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    37s (x2 over 3m22s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     37s (x2 over 3m22s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    25s (x3 over 4m27s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7w87 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-w7w87:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-whk72" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xqmtg" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.793674968s)
--- FAIL: TestAddons/parallel/LocalPath (345.10s)
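The LocalPath failure, like the others in this report, traces to docker.io's toomanyrequests limit rather than to minikube, CRI-O, or the provisioner. Two mitigations commonly used on shared CI hosts, sketched here (--registry-mirror and image load are real minikube options; the mirror URL is illustrative, and --registry-mirror's effect depends on the container runtime in use):

  # route pulls through a mirror at cluster creation
  out/minikube-linux-amd64 start -p addons-947553 --driver=kvm2 --container-runtime=crio --registry-mirror=https://mirror.gcr.io

  # or pre-load the images the tests need from the host
  out/minikube-linux-amd64 -p addons-947553 image load docker.io/nginx:alpine
  out/minikube-linux-amd64 -p addons-947553 image load docker.io/busybox:stable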

                                                
                                    
TestAddons/parallel/Yakd (128.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-nqz6v" [3ac69ce1-c8e4-478b-bc45-5b450445f539] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
helpers_test.go:337: TestAddons/parallel/Yakd: WARNING: pod list for "yakd-dashboard" "app.kubernetes.io/name=yakd-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-947553 -n addons-947553
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-11-20 20:28:26.135078771 +0000 UTC m=+455.021367598
addons_test.go:1047: (dbg) Run:  kubectl --context addons-947553 describe po yakd-dashboard-5ff678cb9-nqz6v -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-947553 describe po yakd-dashboard-5ff678cb9-nqz6v -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-nqz6v
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-947553/192.168.39.80
Start Time:       Thu, 20 Nov 2025 20:21:59 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
                  app.kubernetes.io/name=yakd-dashboard
                  gcp-auth-skip-secret=true
                  pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
  yakd:
    Container ID:   
    Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
    Image ID:       
    Port:           8080/TCP (http)
    Host Port:      0/TCP (http)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  256Mi
    Requests:
      memory:   128Mi
    Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
      HOSTNAME:              yakd-dashboard-5ff678cb9-nqz6v (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdnff (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rdnff:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m27s                  default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-nqz6v to addons-947553
  Warning  Failed     4m57s                  kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": fetching target platform image selected from image index: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m25s (x4 over 6m24s)  kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed     82s (x4 over 4m57s)    kubelet            Error: ErrImagePull
  Warning  Failed     82s (x3 over 4m10s)    kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    6s (x11 over 4m56s)    kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed     6s (x11 over 4m56s)    kubelet            Error: ImagePullBackOff
addons_test.go:1047: (dbg) Run:  kubectl --context addons-947553 logs yakd-dashboard-5ff678cb9-nqz6v -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-947553 logs yakd-dashboard-5ff678cb9-nqz6v -n yakd-dashboard: exit status 1 (76.941767ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-nqz6v" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:1047: kubectl --context addons-947553 logs yakd-dashboard-5ff678cb9-nqz6v -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
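The Yakd pod is throttled the same way; pinning the image by digest does not help, because the manifest fetch itself is the rate-limited request. For triage, the namespace events give the quickest summary (same cluster context assumed):

  kubectl --context addons-947553 get events -n yakd-dashboard --sort-by=.lastTimestamp | tail -n 20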
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-947553 -n addons-947553
helpers_test.go:252: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 logs -n 25: (1.254794515s)
helpers_test.go:260: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-838975 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │ 20 Nov 25 20:20 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ -o=json --download-only -p download-only-948147 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ delete  │ -p download-only-948147                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ --download-only -p binary-mirror-717684 --alsologtostderr --binary-mirror http://127.0.0.1:46607 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ delete  │ -p binary-mirror-717684                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-717684 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ addons  │ disable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ addons  │ enable dashboard -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	│ start   │ -p addons-947553 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ enable headlamp -p addons-947553 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:24 UTC │ 20 Nov 25 20:24 UTC │
	│ addons  │ addons-947553 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ addons  │ addons-947553 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:25 UTC │ 20 Nov 25 20:25 UTC │
	│ ip      │ addons-947553 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-947553                                                                                                                                                                                                                                                                                                                                                                                         │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	│ addons  │ addons-947553 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-947553        │ jenkins │ v1.37.0 │ 20 Nov 25 20:26 UTC │ 20 Nov 25 20:26 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
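The Audit table is minikube's own per-host command history; it can also be regenerated outside a failing test run (hedged: the --audit flag of minikube logs exists in recent releases, including the v1.37.0 under test):

  out/minikube-linux-amd64 logs --audit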
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:04.799759    8315 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:04.799869    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.799880    8315 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:04.799886    8315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:04.800101    8315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:21:04.800589    8315 out.go:368] Setting JSON to false
	I1120 20:21:04.801389    8315 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":215,"bootTime":1763669850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:21:04.801502    8315 start.go:143] virtualization: kvm guest
	I1120 20:21:04.803491    8315 out.go:179] * [addons-947553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:21:04.804816    8315 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:21:04.804809    8315 notify.go:221] Checking for updates...
	I1120 20:21:04.807406    8315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:04.808794    8315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:04.810101    8315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:04.811420    8315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:21:04.812487    8315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:21:04.813679    8315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:21:04.845057    8315 out.go:179] * Using the kvm2 driver based on user configuration
	I1120 20:21:04.846216    8315 start.go:309] selected driver: kvm2
	I1120 20:21:04.846231    8315 start.go:930] validating driver "kvm2" against <nil>
	I1120 20:21:04.846241    8315 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:21:04.846961    8315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:21:04.847180    8315 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:21:04.847211    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:04.847249    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:04.847263    8315 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1120 20:21:04.847320    8315 start.go:353] cluster config:
	{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:04.847407    8315 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:21:04.848659    8315 out.go:179] * Starting "addons-947553" primary control-plane node in "addons-947553" cluster
	I1120 20:21:04.849659    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:04.849691    8315 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:21:04.849701    8315 cache.go:65] Caching tarball of preloaded images
	I1120 20:21:04.849792    8315 preload.go:238] Found /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:21:04.849803    8315 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:21:04.850086    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:04.850110    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json: {Name:mk61841fddacaf75a98d91c699b32f9aeeaf9c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:04.850231    8315 start.go:360] acquireMachinesLock for addons-947553: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 20:21:04.850284    8315 start.go:364] duration metric: took 40.752µs to acquireMachinesLock for "addons-947553"
	I1120 20:21:04.850302    8315 start.go:93] Provisioning new machine with config: &{Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:04.850352    8315 start.go:125] createHost starting for "" (driver="kvm2")
	I1120 20:21:04.852328    8315 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1120 20:21:04.852480    8315 start.go:159] libmachine.API.Create for "addons-947553" (driver="kvm2")
	I1120 20:21:04.852506    8315 client.go:173] LocalClient.Create starting
	I1120 20:21:04.852580    8315 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem
	I1120 20:21:05.105122    8315 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem
	I1120 20:21:05.182169    8315 main.go:143] libmachine: creating domain...
	I1120 20:21:05.182188    8315 main.go:143] libmachine: creating network...
	I1120 20:21:05.183682    8315 main.go:143] libmachine: found existing default network
	I1120 20:21:05.183926    8315 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.184462    8315 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d98350}
	I1120 20:21:05.184549    8315 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-947553</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1120 20:21:05.190086    8315 main.go:143] libmachine: creating private network mk-addons-947553 192.168.39.0/24...
	I1120 20:21:05.255182    8315 main.go:143] libmachine: private network mk-addons-947553 192.168.39.0/24 created
	I1120 20:21:05.255605    8315 main.go:143] libmachine: <network>
	  <name>mk-addons-947553</name>
	  <uuid>aa8efef2-a4fa-46da-99ec-8e728046a9cf</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:9d:8a:68'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
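
Defining and starting mk-addons-947553 maps onto libvirt's define/create pair; the UUID, bridge name (virbr1), and MAC in the XML echoed above were filled in by libvirt, not by the caller. A hedged sketch, reusing the conn and imports from the previous snippet (createPrivateNetwork is a made-up name):

    // createPrivateNetwork defines a persistent network from the XML shown in
    // the log and brings it up; libvirt assigns UUID, bridge, and MAC itself.
    func createPrivateNetwork(conn *libvirt.Connect, networkXML string) error {
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            return fmt.Errorf("defining network: %w", err)
        }
        defer net.Free()
        if err := net.Create(); err != nil {
            return fmt.Errorf("starting network: %w", err)
        }
        // Echo the live definition back, as the log does after creation.
        live, err := net.GetXMLDesc(0)
        if err != nil {
            return err
        }
        fmt.Println(live)
        return nil
    }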
	
	I1120 20:21:05.255642    8315 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.255667    8315 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1120 20:21:05.255686    8315 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.255775    8315 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21923-3793/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1120 20:21:05.515325    8315 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa...
	I1120 20:21:05.718020    8315 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk...
	I1120 20:21:05.718065    8315 main.go:143] libmachine: Writing magic tar header
	I1120 20:21:05.718104    8315 main.go:143] libmachine: Writing SSH key tar header
	I1120 20:21:05.718203    8315 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 ...
	I1120 20:21:05.718284    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553
	I1120 20:21:05.718335    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553 (perms=drwx------)
	I1120 20:21:05.718363    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube/machines
	I1120 20:21:05.718383    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube/machines (perms=drwxr-xr-x)
	I1120 20:21:05.718404    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:05.718421    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793/.minikube (perms=drwxr-xr-x)
	I1120 20:21:05.718438    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21923-3793
	I1120 20:21:05.718456    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21923-3793 (perms=drwxrwxr-x)
	I1120 20:21:05.718473    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1120 20:21:05.718490    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1120 20:21:05.718505    8315 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1120 20:21:05.718521    8315 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1120 20:21:05.718536    8315 main.go:143] libmachine: checking permissions on dir: /home
	I1120 20:21:05.718549    8315 main.go:143] libmachine: skipping /home - not owner
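
The permission pass above starts at the machine directory and walks toward /, adding the owner-execute bit to each directory the current user owns so the store stays traversable, and skipping anything owned by someone else (here /home). A rough Linux-only equivalent (fixPermissions is a hypothetical helper; imports: os, path/filepath, syscall):

    // fixPermissions walks up the parents of path, adding the owner-execute
    // bit to every directory the current user owns, skipping the rest.
    func fixPermissions(path string) error {
        me := os.Getuid()
        for dir := path; dir != "/"; dir = filepath.Dir(dir) {
            fi, err := os.Stat(dir)
            if err != nil {
                return err
            }
            st, ok := fi.Sys().(*syscall.Stat_t)
            if !ok || int(st.Uid) != me {
                continue // e.g. "skipping /home - not owner" in the log
            }
            if err := os.Chmod(dir, fi.Mode().Perm()|0o100); err != nil {
                return err
            }
        }
        return nil
    }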
	I1120 20:21:05.718557    8315 main.go:143] libmachine: defining domain...
	I1120 20:21:05.719886    8315 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
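
The XML above is the requested definition; once the domain is defined and started, libvirt echoes back a fuller document (seen below) with the UUID, machine type, emulator path, and PCI addresses it assigned. A sketch of that define/dump/start sequence with the same bindings (defineAndStart is illustrative):

    // defineAndStart persists the domain config, prints the XML libvirt has
    // normalized, then boots the VM, mirroring the log's order of events.
    func defineAndStart(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return nil, err
        }
        live, err := dom.GetXMLDesc(0) // "getting domain XML..."
        if err == nil {
            fmt.Println(live)
        }
        if err := dom.Create(); err != nil { // "starting domain..."
            dom.Free()
            return nil, err
        }
        return dom, nil
    }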
	
	I1120 20:21:05.727760    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:79:1f:b5 in network default
	I1120 20:21:05.728410    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:05.728434    8315 main.go:143] libmachine: starting domain...
	I1120 20:21:05.728441    8315 main.go:143] libmachine: ensuring networks are active...
	I1120 20:21:05.729136    8315 main.go:143] libmachine: Ensuring network default is active
	I1120 20:21:05.729504    8315 main.go:143] libmachine: Ensuring network mk-addons-947553 is active
	I1120 20:21:05.730087    8315 main.go:143] libmachine: getting domain XML...
	I1120 20:21:05.731121    8315 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-947553</name>
	  <uuid>2ab490c5-e4f0-46af-88ec-dee8117466b4</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/addons-947553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:a7:2c'/>
	      <source network='mk-addons-947553'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:79:1f:b5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1120 20:21:07.012614    8315 main.go:143] libmachine: waiting for domain to start...
	I1120 20:21:07.013937    8315 main.go:143] libmachine: domain is now running
	I1120 20:21:07.013958    8315 main.go:143] libmachine: waiting for IP...
	I1120 20:21:07.014713    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.015361    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.015380    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.015661    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.015708    8315 retry.go:31] will retry after 270.684091ms: waiting for domain to come up
	I1120 20:21:07.288186    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.288839    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.288865    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.289198    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.289247    8315 retry.go:31] will retry after 384.258097ms: waiting for domain to come up
	I1120 20:21:07.674731    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:07.675347    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:07.675362    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:07.675602    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:07.675642    8315 retry.go:31] will retry after 325.268494ms: waiting for domain to come up
	I1120 20:21:08.002089    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.002712    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.002729    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.003011    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.003044    8315 retry.go:31] will retry after 532.953777ms: waiting for domain to come up
	I1120 20:21:08.537708    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:08.538539    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:08.538554    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:08.538839    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:08.538878    8315 retry.go:31] will retry after 671.32775ms: waiting for domain to come up
	I1120 20:21:09.212032    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.212741    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.212765    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.213102    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.213142    8315 retry.go:31] will retry after 640.716702ms: waiting for domain to come up
	I1120 20:21:09.855420    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:09.856063    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:09.856083    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:09.856391    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:09.856428    8315 retry.go:31] will retry after 715.495515ms: waiting for domain to come up
	I1120 20:21:10.573053    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:10.573668    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:10.573685    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:10.574006    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:10.574049    8315 retry.go:31] will retry after 1.386473849s: waiting for domain to come up
	I1120 20:21:11.962706    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:11.963438    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:11.963454    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:11.963745    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:11.963779    8315 retry.go:31] will retry after 1.671471747s: waiting for domain to come up
	I1120 20:21:13.637832    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:13.638601    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:13.638620    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:13.639009    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:13.639040    8315 retry.go:31] will retry after 1.524844768s: waiting for domain to come up
	I1120 20:21:15.165792    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:15.166517    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:15.166555    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:15.166908    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:15.166949    8315 retry.go:31] will retry after 2.171556586s: waiting for domain to come up
	I1120 20:21:17.341326    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:17.341989    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:17.342008    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:17.342371    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:17.342408    8315 retry.go:31] will retry after 2.613437366s: waiting for domain to come up
	I1120 20:21:19.957329    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:19.958097    8315 main.go:143] libmachine: no network interface addresses found for domain addons-947553 (source=lease)
	I1120 20:21:19.958115    8315 main.go:143] libmachine: trying to list again with source=arp
	I1120 20:21:19.958466    8315 main.go:143] libmachine: unable to find current IP address of domain addons-947553 in network mk-addons-947553 (interfaces detected: [])
	I1120 20:21:19.958501    8315 retry.go:31] will retry after 4.105323605s: waiting for domain to come up
	I1120 20:21:24.068938    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069767    8315 main.go:143] libmachine: domain addons-947553 has current primary IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.069790    8315 main.go:143] libmachine: found domain IP: 192.168.39.80
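
The loop above polls libvirt for the guest's address, first from the DHCP lease table and then from the ARP cache, backing off with jitter until the NIC with the expected MAC reports an IPv4 address. A condensed sketch (waitForIP and the retry schedule are illustrative; extra imports: math/rand, time):

    // waitForIP polls the domain's interfaces until the NIC with the given
    // MAC shows an IPv4 address, trying source=lease first and source=arp second.
    func waitForIP(dom *libvirt.Domain, mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            for _, src := range []libvirt.DomainInterfaceAddressesSource{
                libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE,
                libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP,
            } {
                ifaces, err := dom.ListAllInterfaceAddresses(src)
                if err != nil {
                    continue // treat lookup errors as "not up yet"
                }
                for _, iface := range ifaces {
                    if iface.Hwaddr != mac {
                        continue
                    }
                    for _, addr := range iface.Addrs {
                        if addr.Type == int(libvirt.IP_ADDR_TYPE_IPV4) {
                            return addr.Addr, nil
                        }
                    }
                }
            }
            // Jittered backoff, roughly like the retry.go intervals above.
            time.Sleep(300*time.Millisecond + time.Duration(rand.Intn(1200))*time.Millisecond)
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }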
	I1120 20:21:24.069802    8315 main.go:143] libmachine: reserving static IP address...
	I1120 20:21:24.070350    8315 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-947553", mac: "52:54:00:7b:a7:2c", ip: "192.168.39.80"} in network mk-addons-947553
	I1120 20:21:24.251658    8315 main.go:143] libmachine: reserved static IP address 192.168.39.80 for domain addons-947553
	I1120 20:21:24.251676    8315 main.go:143] libmachine: waiting for SSH...
	I1120 20:21:24.251682    8315 main.go:143] libmachine: Getting to WaitForSSH function...
	I1120 20:21:24.254839    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255480    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.255507    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.255698    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.255932    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.255946    8315 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1120 20:21:24.357511    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
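
"waiting for SSH" amounts to dialing port 22 with the generated machines/addons-947553/id_rsa key and running exit 0 until the guest accepts it. A minimal probe in the same spirit, using golang.org/x/crypto/ssh (sshReady is a made-up name; extra imports: net, os, time):

    // sshReady returns nil once "exit 0" succeeds over SSH, i.e. the guest is
    // booted far enough to accept key-based logins.
    func sshReady(host, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // guest user, per the sshutil lines above
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh VM, no known_hosts yet
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", net.JoinHostPort(host, "22"), cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }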
	I1120 20:21:24.357947    8315 main.go:143] libmachine: domain creation complete
	I1120 20:21:24.359373    8315 machine.go:94] provisionDockerMachine start ...
	I1120 20:21:24.361503    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.361927    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.361949    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.362121    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.362368    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.362381    8315 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:21:24.462018    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1120 20:21:24.462045    8315 buildroot.go:166] provisioning hostname "addons-947553"
	I1120 20:21:24.464884    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465302    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.465327    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.465556    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.465783    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.465796    8315 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-947553 && echo "addons-947553" | sudo tee /etc/hostname
	I1120 20:21:24.590591    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-947553
	
	I1120 20:21:24.593332    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593716    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.593739    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.593959    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:24.594201    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:24.594220    8315 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-947553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-947553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-947553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:21:24.704349    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:21:24.704375    8315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 20:21:24.704425    8315 buildroot.go:174] setting up certificates
	I1120 20:21:24.704437    8315 provision.go:84] configureAuth start
	I1120 20:21:24.707018    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.707382    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.707405    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709518    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709819    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.709844    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.709960    8315 provision.go:143] copyHostCerts
	I1120 20:21:24.710021    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 20:21:24.710131    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 20:21:24.710204    8315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 20:21:24.710279    8315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.addons-947553 san=[127.0.0.1 192.168.39.80 addons-947553 localhost minikube]
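
The server certificate above is signed by the local CA with SANs covering the loopback address, the VM IP, and the addons-947553/localhost/minikube hostnames. A compact sketch of that signing step with Go's crypto/x509, assuming an RSA PKCS#1 CA key, which is the usual layout under .minikube/certs (newServerCert is illustrative; imports: crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix, encoding/pem, math/big, net, time):

    // newServerCert signs a server certificate with the given CA, embedding
    // the IP and DNS SANs the log lists after "san=".
    func newServerCert(caCrt *x509.Certificate, caKey *rsa.PrivateKey,
        ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-947553"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.80
            DNSNames:     dnsNames, // e.g. addons-947553, localhost, minikube
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCrt, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }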
	I1120 20:21:24.868893    8315 provision.go:177] copyRemoteCerts
	I1120 20:21:24.868955    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:21:24.871421    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.871778    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:24.871813    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:24.872001    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:24.954555    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:21:24.986020    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:21:25.016669    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:21:25.046712    8315 provision.go:87] duration metric: took 342.262806ms to configureAuth
	I1120 20:21:25.046739    8315 buildroot.go:189] setting minikube options for container-runtime
	I1120 20:21:25.046974    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:25.049642    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050132    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.050155    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.050331    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.050555    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.050571    8315 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:21:25.295480    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:21:25.295505    8315 machine.go:97] duration metric: took 936.115627ms to provisionDockerMachine
	I1120 20:21:25.295517    8315 client.go:176] duration metric: took 20.443004703s to LocalClient.Create
	I1120 20:21:25.295530    8315 start.go:167] duration metric: took 20.443049547s to libmachine.API.Create "addons-947553"
	I1120 20:21:25.295539    8315 start.go:293] postStartSetup for "addons-947553" (driver="kvm2")
	I1120 20:21:25.295551    8315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:21:25.295599    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:21:25.298453    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.298889    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.298912    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.299118    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.380706    8315 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:21:25.386067    8315 info.go:137] Remote host: Buildroot 2025.02
	I1120 20:21:25.386096    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 20:21:25.386163    8315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 20:21:25.386186    8315 start.go:296] duration metric: took 90.641008ms for postStartSetup
	I1120 20:21:25.389037    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389412    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.389432    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.389654    8315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/config.json ...
	I1120 20:21:25.389819    8315 start.go:128] duration metric: took 20.539459484s to createHost
	I1120 20:21:25.392104    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392481    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.392504    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.392693    8315 main.go:143] libmachine: Using SSH client type: native
	I1120 20:21:25.392952    8315 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I1120 20:21:25.392965    8315 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 20:21:25.493567    8315 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763670085.456620738
	
	I1120 20:21:25.493591    8315 fix.go:216] guest clock: 1763670085.456620738
	I1120 20:21:25.493598    8315 fix.go:229] Guest: 2025-11-20 20:21:25.456620738 +0000 UTC Remote: 2025-11-20 20:21:25.389830223 +0000 UTC m=+20.636741018 (delta=66.790515ms)
	I1120 20:21:25.493614    8315 fix.go:200] guest clock delta is within tolerance: 66.790515ms
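
The guest-clock check runs date +%s.%N inside the VM and compares the result against the host's wall clock; here the 66.790515ms delta is inside the tolerance, so no clock adjustment is needed. A small sketch of that comparison (parseGuestClock and the tolerance handling are assumptions, not minikube's actual code; imports: strings, strconv, time):

    // parseGuestClock turns `date +%s.%N` output such as
    // "1763670085.456620738" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            frac := (parts[1] + "000000000")[:9] // right-pad to nanoseconds
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    // clockWithinTolerance reports whether |guest-host| is inside tol.
    func clockWithinTolerance(guest, host time.Time, tol time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tol // log: delta=66.790515ms, within tolerance
    }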
	I1120 20:21:25.493618    8315 start.go:83] releasing machines lock for "addons-947553", held for 20.643324737s
	I1120 20:21:25.496394    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.496731    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.496750    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.497416    8315 ssh_runner.go:195] Run: cat /version.json
	I1120 20:21:25.497480    8315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:21:25.500666    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.500828    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501105    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501135    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501175    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:25.501196    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:25.501333    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.501488    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:25.605393    8315 ssh_runner.go:195] Run: systemctl --version
	I1120 20:21:25.612006    8315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:21:25.772800    8315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:21:25.780223    8315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:21:25.780282    8315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:21:25.801102    8315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 20:21:25.801129    8315 start.go:496] detecting cgroup driver to use...
	I1120 20:21:25.801204    8315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:21:25.821353    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:21:25.843177    8315 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:21:25.843231    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:21:25.868522    8315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:21:25.885911    8315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:21:26.035325    8315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:21:26.252665    8315 docker.go:234] disabling docker service ...
	I1120 20:21:26.252745    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:21:26.269964    8315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:21:26.285883    8315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:21:26.444730    8315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:21:26.588236    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:21:26.605731    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:21:26.631197    8315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:21:26.631278    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.644989    8315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 20:21:26.645074    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.659053    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.672870    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.687322    8315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:21:26.702284    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.716913    8315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.738871    8315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:21:26.752362    8315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:21:26.763831    8315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1120 20:21:26.763912    8315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
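
The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so the driver falls back to modprobe. A tiny sketch of that probe-then-load fallback (ensureBrNetfilter is a made-up name; import: os/exec):

    // ensureBrNetfilter checks whether the bridge netfilter sysctl exists and,
    // if not, loads br_netfilter so bridged traffic hits iptables.
    func ensureBrNetfilter() error {
        probe := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables")
        if err := probe.Run(); err == nil {
            return nil // sysctl present, module already loaded
        }
        return exec.Command("sudo", "modprobe", "br_netfilter").Run()
    }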
	I1120 20:21:26.789002    8315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:21:26.803924    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:26.952317    8315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:21:27.200343    8315 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:21:27.200435    8315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:21:27.206384    8315 start.go:564] Will wait 60s for crictl version
	I1120 20:21:27.206464    8315 ssh_runner.go:195] Run: which crictl
	I1120 20:21:27.211256    8315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 20:21:27.250686    8315 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 20:21:27.250789    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.281244    8315 ssh_runner.go:195] Run: crio --version
	I1120 20:21:27.453589    8315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1120 20:21:27.519790    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520199    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:27.520222    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:27.520413    8315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1120 20:21:27.525676    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:27.542910    8315 kubeadm.go:884] updating cluster {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:21:27.543059    8315 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:21:27.543129    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:27.574818    8315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 20:21:27.574926    8315 ssh_runner.go:195] Run: which lz4
	I1120 20:21:27.580276    8315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1120 20:21:27.587089    8315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1120 20:21:27.587120    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1120 20:21:29.151749    8315 crio.go:462] duration metric: took 1.571528535s to copy over tarball
	I1120 20:21:29.151825    8315 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1120 20:21:30.840010    8315 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.688159594s)
	I1120 20:21:30.840053    8315 crio.go:469] duration metric: took 1.688277204s to extract the tarball
	I1120 20:21:30.840061    8315 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1120 20:21:30.882678    8315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:21:30.922657    8315 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:21:30.922680    8315 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:21:30.922687    8315 kubeadm.go:935] updating node { 192.168.39.80 8443 v1.34.1 crio true true} ...
	I1120 20:21:30.922783    8315 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-947553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:21:30.922874    8315 ssh_runner.go:195] Run: crio config
	I1120 20:21:30.970750    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:30.970771    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:30.970787    8315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:21:30.970807    8315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-947553 NodeName:addons-947553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:21:30.970921    8315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-947553"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.80"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 20:21:30.970978    8315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:21:30.984115    8315 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:21:30.984179    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:21:30.997000    8315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1120 20:21:31.019490    8315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:21:31.040334    8315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1120 20:21:31.062447    8315 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I1120 20:21:31.066873    8315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:21:31.082252    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:31.225462    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:31.260197    8315 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553 for IP: 192.168.39.80
	I1120 20:21:31.260217    8315 certs.go:195] generating shared ca certs ...
	I1120 20:21:31.260232    8315 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.260386    8315 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 20:21:31.565609    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt ...
	I1120 20:21:31.565637    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt: {Name:mkbaf0e14aa61a2ff1b23e3cacd2c256e32e6647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565863    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key ...
	I1120 20:21:31.565878    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key: {Name:mk6aeca1c4b3f3e4ff969d4a1bc1fecc4b0fa343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:31.565998    8315 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 20:21:32.272316    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt ...
	I1120 20:21:32.272345    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt: {Name:mk6e855dc2ded0db05a3455c6e851abbeb92043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272564    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key ...
	I1120 20:21:32.272590    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key: {Name:mkc4fdf928a4209309cd887410d07a4fb9cad8d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.272702    8315 certs.go:257] generating profile certs ...
	I1120 20:21:32.272778    8315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key
	I1120 20:21:32.272805    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt with IP's: []
	I1120 20:21:32.531299    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt ...
	I1120 20:21:32.531330    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: {Name:mkacef1d43c5fe9ffb1d09b61b8a2a7db2cf094d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531547    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key ...
	I1120 20:21:32.531568    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.key: {Name:mk2cb4e6b2267fb750aa726a4e65ebdfb9212cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.531675    8315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2
	I1120 20:21:32.531704    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.80]
	I1120 20:21:32.818886    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 ...
	I1120 20:21:32.818915    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2: {Name:mk790b39b3d9776066f9b6fb58232a0c1fea8994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819086    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 ...
	I1120 20:21:32.819099    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2: {Name:mk4563c621ceba8c563d34ed8d2ea6985bc21d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:32.819174    8315 certs.go:382] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt
	I1120 20:21:32.819257    8315 certs.go:386] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key.70d8b8d2 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key
	I1120 20:21:32.819305    8315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key
	I1120 20:21:32.819322    8315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt with IP's: []
	I1120 20:21:33.229266    8315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt ...
	I1120 20:21:33.229303    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt: {Name:mk842c9b1c7d59553f9e9969540d37e3f124f603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:33.229499    8315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key ...
	I1120 20:21:33.229519    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key: {Name:mk774bcb76c9d8c8959c52bd40c6db81e671bce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
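
[note] The sequence above builds two CAs (the cluster CA and the front-proxy CA) and then signs profile certs against them; the apiserver cert carries the IP SANs 10.96.0.1 (first IP of the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1, and the node IP 192.168.39.80. The standard-library shape of that CA-plus-leaf signing step, as a self-contained sketch (not minikube's crypto.go; errors elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA, analogous to minikubeCA above.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Apiserver leaf cert signed by the CA, with the IP SANs from the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.80"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		_ = leafDER
	}
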
	I1120 20:21:33.229746    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 20:21:33.229789    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:21:33.229825    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:21:33.229867    8315 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 20:21:33.230425    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:21:33.262117    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:21:33.298274    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:21:33.335705    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:21:33.369053    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 20:21:33.401973    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:21:33.434941    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:21:33.467052    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:21:33.499463    8315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:21:33.533326    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:21:33.557271    8315 ssh_runner.go:195] Run: openssl version
	I1120 20:21:33.565199    8315 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.579252    8315 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:21:33.592359    8315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598287    8315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.598357    8315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:21:33.606765    8315 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:21:33.620434    8315 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 20:21:33.633673    8315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:21:33.639557    8315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:21:33.639640    8315 kubeadm.go:401] StartCluster: {Name:addons-947553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-947553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:21:33.639719    8315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:21:33.639785    8315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:21:33.678141    8315 cri.go:89] found id: ""
	I1120 20:21:33.678230    8315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:21:33.692525    8315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:21:33.705815    8315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:21:33.718541    8315 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:21:33.718560    8315 kubeadm.go:158] found existing configuration files:
	
	I1120 20:21:33.718602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:21:33.730012    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:21:33.730084    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:21:33.742602    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:21:33.754750    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:21:33.754833    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:21:33.773694    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.789522    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:21:33.789573    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:21:33.803646    8315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:21:33.817663    8315 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:21:33.817714    8315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 20:21:33.830895    8315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1120 20:21:34.010421    8315 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:21:45.965962    8315 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:21:45.966043    8315 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:21:45.966134    8315 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:21:45.966274    8315 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:21:45.966402    8315 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:21:45.966485    8315 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:21:45.968313    8315 out.go:252]   - Generating certificates and keys ...
	I1120 20:21:45.968415    8315 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:21:45.968512    8315 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:21:45.968625    8315 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:21:45.968701    8315 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:21:45.968754    8315 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:21:45.968819    8315 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:21:45.968913    8315 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:21:45.969101    8315 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969192    8315 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:21:45.969314    8315 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-947553 localhost] and IPs [192.168.39.80 127.0.0.1 ::1]
	I1120 20:21:45.969371    8315 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:21:45.969421    8315 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:21:45.969458    8315 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:21:45.969504    8315 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:21:45.969545    8315 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:21:45.969595    8315 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:21:45.969637    8315 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:21:45.969697    8315 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:21:45.969754    8315 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:21:45.969823    8315 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:21:45.969888    8315 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:21:45.971245    8315 out.go:252]   - Booting up control plane ...
	I1120 20:21:45.971330    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:21:45.971396    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:21:45.971453    8315 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:21:45.971554    8315 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:21:45.971660    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:21:45.971754    8315 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:21:45.971826    8315 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:21:45.971880    8315 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:21:45.972014    8315 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:21:45.972174    8315 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:21:45.972260    8315 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.915384ms
	I1120 20:21:45.972339    8315 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:21:45.972417    8315 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.80:8443/livez
	I1120 20:21:45.972499    8315 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:21:45.972565    8315 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:21:45.972626    8315 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.009474334s
	I1120 20:21:45.972680    8315 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.600510793s
	I1120 20:21:45.972745    8315 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502310178s
	I1120 20:21:45.972837    8315 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:21:45.972964    8315 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:21:45.973026    8315 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:21:45.973213    8315 kubeadm.go:319] [mark-control-plane] Marking the node addons-947553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:21:45.973262    8315 kubeadm.go:319] [bootstrap-token] Using token: 2xpoj0.3iafwcplk6gzssxo
	I1120 20:21:45.975478    8315 out.go:252]   - Configuring RBAC rules ...
	I1120 20:21:45.975637    8315 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:21:45.975749    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:21:45.975873    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:21:45.975991    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:21:45.976087    8315 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:21:45.976159    8315 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:21:45.976260    8315 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:21:45.976297    8315 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:21:45.976339    8315 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:21:45.976345    8315 kubeadm.go:319] 
	I1120 20:21:45.976416    8315 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:21:45.976432    8315 kubeadm.go:319] 
	I1120 20:21:45.976492    8315 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:21:45.976498    8315 kubeadm.go:319] 
	I1120 20:21:45.976524    8315 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:21:45.976573    8315 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:21:45.976612    8315 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:21:45.976618    8315 kubeadm.go:319] 
	I1120 20:21:45.976662    8315 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:21:45.976669    8315 kubeadm.go:319] 
	I1120 20:21:45.976708    8315 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:21:45.976716    8315 kubeadm.go:319] 
	I1120 20:21:45.976761    8315 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:21:45.976832    8315 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:21:45.976903    8315 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:21:45.976909    8315 kubeadm.go:319] 
	I1120 20:21:45.976975    8315 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:21:45.977039    8315 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:21:45.977046    8315 kubeadm.go:319] 
	I1120 20:21:45.977115    8315 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977197    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 \
	I1120 20:21:45.977222    8315 kubeadm.go:319] 	--control-plane 
	I1120 20:21:45.977228    8315 kubeadm.go:319] 
	I1120 20:21:45.977318    8315 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:21:45.977332    8315 kubeadm.go:319] 
	I1120 20:21:45.977426    8315 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2xpoj0.3iafwcplk6gzssxo \
	I1120 20:21:45.977559    8315 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 
	I1120 20:21:45.977570    8315 cni.go:84] Creating CNI manager for ""
	I1120 20:21:45.977577    8315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:21:45.978905    8315 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1120 20:21:45.980206    8315 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1120 20:21:45.998278    8315 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
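
[note] The 496-byte /etc/cni/net.d/1-k8s.conflist written here selects the in-tree bridge plugin recommended above for the kvm2 + crio combination. The exact payload is not shown in the log; a sketch that writes a generic bridge conflist of the same shape (illustrative only, not the byte-for-byte minikube file):

	package main

	import (
		"os"
		"path/filepath"
	)

	// A generic bridge/host-local conflist; minikube's actual file differs.
	const conflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}`

	func main() {
		dir := "/etc/cni/net.d" // same target as the sudo mkdir -p above
		if err := os.MkdirAll(dir, 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
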
	I1120 20:21:46.024557    8315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:21:46.024640    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.024705    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-947553 minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-947553 minikube.k8s.io/primary=true
	I1120 20:21:46.163608    8315 ops.go:34] apiserver oom_adj: -16
	I1120 20:21:46.163786    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:46.664084    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.164553    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:47.664473    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.164635    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:48.664221    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.163942    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:49.663901    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.164591    8315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:21:50.290234    8315 kubeadm.go:1114] duration metric: took 4.265649758s to wait for elevateKubeSystemPrivileges
	I1120 20:21:50.290282    8315 kubeadm.go:403] duration metric: took 16.650648707s to StartCluster
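
[note] The repeated "kubectl get sa default" lines above are a poll at roughly 500ms intervals: the loop exits once the default service account exists, which is the signal that elevateKubeSystemPrivileges can proceed (4.27s here). A self-contained Go sketch of that poll-until-deadline shape (hypothetical helper, not minikube's implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries check every interval until it succeeds or the
	// timeout elapses, like the repeated service-account probes above.
	func pollUntil(interval, timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		err := pollUntil(500*time.Millisecond, 10*time.Second, func() error {
			if time.Since(start) > 2*time.Second { // stand-in for the kubectl probe
				return nil
			}
			return errors.New("default service account not present yet")
		})
		fmt.Println(err)
	}
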
	I1120 20:21:50.290306    8315 settings.go:142] acquiring lock: {Name:mke92973c8f33ef32fe11f7b266adf74cd3ec47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.290453    8315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:50.290990    8315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:21:50.291268    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:21:50.291283    8315 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:21:50.291344    8315 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 20:21:50.291469    8315 addons.go:70] Setting gcp-auth=true in profile "addons-947553"
	I1120 20:21:50.291484    8315 addons.go:70] Setting ingress=true in profile "addons-947553"
	I1120 20:21:50.291498    8315 mustload.go:66] Loading cluster: addons-947553
	I1120 20:21:50.291500    8315 addons.go:239] Setting addon ingress=true in "addons-947553"
	I1120 20:21:50.291494    8315 addons.go:70] Setting cloud-spanner=true in profile "addons-947553"
	I1120 20:21:50.291519    8315 addons.go:239] Setting addon cloud-spanner=true in "addons-947553"
	I1120 20:21:50.291525    8315 addons.go:70] Setting registry=true in profile "addons-947553"
	I1120 20:21:50.291542    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291555    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291554    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291565    8315 addons.go:239] Setting addon registry=true in "addons-947553"
	I1120 20:21:50.291594    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291595    8315 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.291607    8315 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-947553"
	I1120 20:21:50.291627    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.291692    8315 config.go:182] Loaded profile config "addons-947553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:21:50.291474    8315 addons.go:70] Setting yakd=true in profile "addons-947553"
	I1120 20:21:50.292160    8315 addons.go:239] Setting addon yakd=true in "addons-947553"
	I1120 20:21:50.292192    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292250    8315 addons.go:70] Setting inspektor-gadget=true in profile "addons-947553"
	I1120 20:21:50.292272    8315 addons.go:239] Setting addon inspektor-gadget=true in "addons-947553"
	I1120 20:21:50.292297    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292485    8315 addons.go:70] Setting ingress-dns=true in profile "addons-947553"
	I1120 20:21:50.292520    8315 addons.go:239] Setting addon ingress-dns=true in "addons-947553"
	I1120 20:21:50.292545    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292621    8315 addons.go:70] Setting registry-creds=true in profile "addons-947553"
	I1120 20:21:50.292644    8315 addons.go:239] Setting addon registry-creds=true in "addons-947553"
	I1120 20:21:50.292671    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292677    8315 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-947553"
	I1120 20:21:50.292719    8315 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:21:50.292755    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.292807    8315 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-947553"
	I1120 20:21:50.292829    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-947553"
	I1120 20:21:50.292880    8315 addons.go:70] Setting metrics-server=true in profile "addons-947553"
	I1120 20:21:50.292897    8315 addons.go:239] Setting addon metrics-server=true in "addons-947553"
	I1120 20:21:50.292922    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293069    8315 out.go:179] * Verifying Kubernetes components...
	I1120 20:21:50.293281    8315 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-947553"
	I1120 20:21:50.293300    8315 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-947553"
	I1120 20:21:50.293321    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293536    8315 addons.go:70] Setting default-storageclass=true in profile "addons-947553"
	I1120 20:21:50.293556    8315 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-947553"
	I1120 20:21:50.293573    8315 addons.go:70] Setting storage-provisioner=true in profile "addons-947553"
	I1120 20:21:50.293591    8315 addons.go:239] Setting addon storage-provisioner=true in "addons-947553"
	I1120 20:21:50.293613    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.293979    8315 addons.go:70] Setting volcano=true in profile "addons-947553"
	I1120 20:21:50.294002    8315 addons.go:239] Setting addon volcano=true in "addons-947553"
	I1120 20:21:50.294026    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294103    8315 addons.go:70] Setting volumesnapshots=true in profile "addons-947553"
	I1120 20:21:50.294122    8315 addons.go:239] Setting addon volumesnapshots=true in "addons-947553"
	I1120 20:21:50.294146    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.294465    8315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:21:50.297973    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.299952    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 20:21:50.299964    8315 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 20:21:50.300060    8315 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 20:21:50.300093    8315 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 20:21:50.299977    8315 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 20:21:50.301985    8315 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-947553"
	I1120 20:21:50.302030    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.302603    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 20:21:50.303185    8315 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 20:21:50.302631    8315 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:50.303261    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	W1120 20:21:50.302916    8315 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 20:21:50.303040    8315 addons.go:239] Setting addon default-storageclass=true in "addons-947553"
	I1120 20:21:50.303355    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:50.303953    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 20:21:50.303969    8315 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 20:21:50.303973    8315 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 20:21:50.303953    8315 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 20:21:50.304024    8315 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:50.305543    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 20:21:50.304040    8315 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 20:21:50.304099    8315 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 20:21:50.305800    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 20:21:50.304918    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.304913    8315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:21:50.305899    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:50.307319    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 20:21:50.306014    8315 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:50.307351    8315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:21:50.307429    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.307470    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 20:21:50.307480    8315 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1120 20:21:50.306784    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:21:50.307511    8315 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:21:50.306817    8315 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 20:21:50.307620    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.306822    8315 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:50.307695    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 20:21:50.307706    8315 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:50.307716    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 20:21:50.306909    8315 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:50.308092    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 20:21:50.308474    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 20:21:50.308512    8315 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 20:21:50.308524    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 20:21:50.308827    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.308882    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309172    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.309208    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.309325    8315 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 20:21:50.309319    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.309343    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:50.309353    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:21:50.309929    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.310172    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:21:50.311742    8315 out.go:179]   - Using image docker.io/busybox:stable
	I1120 20:21:50.311746    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 20:21:50.311894    8315 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:50.311914    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 20:21:50.313106    8315 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:50.313128    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 20:21:50.314097    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.314587    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 20:21:50.315478    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.315516    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.316257    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.316610    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 20:21:50.317131    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.317791    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318124    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318489    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.318521    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.318877    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.319057    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319200    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319245    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 20:21:50.319767    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319780    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.319803    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.319808    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320039    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320130    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320260    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.320721    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.320726    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321176    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321210    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321308    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321337    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321371    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321267    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321416    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321437    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321401    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.321692    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321834    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 20:21:50.321903    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.321928    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.321951    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322097    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322416    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322441    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.322690    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.322712    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.322755    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323004    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323171    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.323197    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.323359    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.323763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324196    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.324226    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.324375    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:50.324536    8315 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 20:21:50.325593    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 20:21:50.325607    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 20:21:50.328078    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328534    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:50.328557    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:50.328735    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	W1120 20:21:50.476524    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.476558    8315 retry.go:31] will retry after 236.913044ms: ssh: handshake failed: read tcp 192.168.39.1:44962->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513415    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513438    8315 retry.go:31] will retry after 367.013463ms: ssh: handshake failed: read tcp 192.168.39.1:44984->192.168.39.80:22: read: connection reset by peer
	W1120 20:21:50.513646    8315 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
	I1120 20:21:50.513672    8315 retry.go:31] will retry after 332.960576ms: ssh: handshake failed: read tcp 192.168.39.1:44998->192.168.39.80:22: read: connection reset by peer
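
[note] The handshake failures above are transient: many addon installers dial ssh concurrently right after kubeadm init, and the host's sshd resets some connections, so each failure is absorbed by a retry with a randomized delay ("will retry after ..."). A sketch of that retry-with-jitter shape (illustrative; minikube's retry.go may differ in detail):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter re-runs op with a randomized pause between attempts,
	// matching the "will retry after 236.913044ms" pattern above.
	func retryWithJitter(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			pause := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", pause, err)
			time.Sleep(pause)
		}
		return err
	}

	func main() {
		calls := 0
		_ = retryWithJitter(3, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: connection reset by peer")
			}
			return nil
		})
	}
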
	I1120 20:21:50.932554    8315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:21:50.932720    8315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
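
[note] The sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line so pods can resolve host.minikube.internal to the host-side gateway IP (192.168.39.1), and inserts "log" ahead of "errors" to enable query logging. Reconstructed from the sed expressions, the affected part of the server block ends up looking like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

The fallthrough directive lets all other names continue to the forward plugin as before.
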
	I1120 20:21:51.133049    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 20:21:51.144339    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:21:51.194458    8315 node_ready.go:35] waiting up to 6m0s for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206010    8315 node_ready.go:49] node "addons-947553" is "Ready"
	I1120 20:21:51.206043    8315 node_ready.go:38] duration metric: took 11.547378ms for node "addons-947553" to be "Ready" ...
	I1120 20:21:51.206057    8315 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:21:51.206112    8315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:21:51.317342    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:21:51.364561    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:21:51.396520    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 20:21:51.396550    8315 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 20:21:51.401286    8315 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 20:21:51.401312    8315 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 20:21:51.407832    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:21:51.408939    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:21:51.438765    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:21:51.452371    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:21:51.487541    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:21:51.487567    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 20:21:51.667634    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:21:51.705278    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 20:21:51.705307    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 20:21:52.073299    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 20:21:52.073332    8315 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 20:21:52.156840    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 20:21:52.156890    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 20:21:52.182216    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 20:21:52.182260    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 20:21:52.289345    8315 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.289373    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 20:21:52.358156    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:21:52.358186    8315 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:21:52.524224    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:21:52.790466    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 20:21:52.790495    8315 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 20:21:52.867899    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 20:21:52.867926    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 20:21:52.911549    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:21:52.970452    8315 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 20:21:52.970488    8315 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 20:21:53.004660    8315 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.004687    8315 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:21:53.165475    8315 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.165505    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 20:21:53.292981    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 20:21:53.293014    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 20:21:53.388236    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 20:21:53.388266    8315 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 20:21:53.476188    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:21:53.678912    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:21:53.790164    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 20:21:53.790192    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 20:21:53.898000    8315 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:53.898021    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 20:21:54.089534    8315 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 20:21:54.089570    8315 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 20:21:54.326111    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:21:54.418621    8315 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.485861131s)
	I1120 20:21:54.418657    8315 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
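The sed pipeline that just completed splices a hosts stanza ahead of the forward plugin in the coredns Corefile so pods can resolve host.minikube.internal to the host bridge IP. The same edit, sketched with client-go instead of sed over SSH (a sketch, not minikube's actual implementation; the stanza must match the Corefile's eight-space plugin indent):

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// hostsBlock matches the eight-space plugin indent of the stock Corefile.
	const hostsBlock = `        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	`

	// injectHostRecord splices the hosts stanza ahead of the forward plugin,
	// the same edit the sed pipeline above performs over SSH.
	func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		corefile := cm.Data["Corefile"]
		if strings.Contains(corefile, "host.minikube.internal") {
			return nil // already injected; keep the edit idempotent
		}
		cm.Data["Corefile"] = strings.Replace(corefile,
			"        forward . /etc/resolv.conf",
			hostsBlock+"        forward . /etc/resolv.conf", 1)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		if err := injectHostRecord(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
			panic(err)
		}
	}
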
	I1120 20:21:54.662053    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 20:21:54.662081    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 20:21:54.924608    8315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-947553" context rescaled to 1 replicas
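The rescale logged at kapi.go:214 trims the default two-replica coredns deployment down to one replica for the single-node VM. Sketched via the scale subresource (client-go; kubeconfig path assumed as above):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// rescaleCoreDNS drops coredns to one replica via the scale subresource,
	// the operation kapi.go:214 reports above.
	func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
		s, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		s.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", s, metav1.UpdateOptions{})
		return err
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		if err := rescaleCoreDNS(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
			panic(err)
		}
	}
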
	I1120 20:21:55.256603    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 20:21:55.256640    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 20:21:55.513213    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.380124251s)
	I1120 20:21:55.513226    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.368859446s)
	I1120 20:21:55.513320    8315 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.307185785s)
	I1120 20:21:55.513363    8315 api_server.go:72] duration metric: took 5.222046626s to wait for apiserver process to appear ...
	I1120 20:21:55.513378    8315 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:21:55.513400    8315 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I1120 20:21:55.523525    8315 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I1120 20:21:55.528356    8315 api_server.go:141] control plane version: v1.34.1
	I1120 20:21:55.528379    8315 api_server.go:131] duration metric: took 14.994765ms to wait for apiserver health ...
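The healthz wait above is a plain HTTPS poll: hit /healthz until the body reads "ok", then read the server version. A sketch of that loop; the InsecureSkipVerify is an assumption for brevity, where minikube instead verifies against the cluster CA (the unauthenticated request works because /healthz is granted to anonymous users by default RBAC):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver's /healthz endpoint until it answers
	// 200 "ok", mirroring api_server.go's healthz wait above.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.39.80:8443/healthz", time.Minute); err != nil {
			panic(err)
		}
	}
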
	I1120 20:21:55.528386    8315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:21:55.548383    8315 system_pods.go:59] 10 kube-system pods found
	I1120 20:21:55.548433    8315 system_pods.go:61] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.548445    8315 system_pods.go:61] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548456    8315 system_pods.go:61] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.548466    8315 system_pods.go:61] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.548475    8315 system_pods.go:61] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.548481    8315 system_pods.go:61] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.548491    8315 system_pods.go:61] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.548496    8315 system_pods.go:61] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.548506    8315 system_pods.go:61] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.548517    8315 system_pods.go:61] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.548528    8315 system_pods.go:74] duration metric: took 20.135717ms to wait for pod list to return data ...
	I1120 20:21:55.548544    8315 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:21:55.562077    8315 default_sa.go:45] found service account: "default"
	I1120 20:21:55.562106    8315 default_sa.go:55] duration metric: took 13.552829ms for default service account to be created ...
	I1120 20:21:55.562116    8315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:21:55.573516    8315 system_pods.go:86] 10 kube-system pods found
	I1120 20:21:55.573548    8315 system_pods.go:89] "amd-gpu-device-plugin-sl95v" [bfbe4372-28d1-4dc0-ace1-e7096a3042ce] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:21:55.573556    8315 system_pods.go:89] "coredns-66bc5c9577-nfspv" [8d309416-de81-45d1-b71b-c4cc7c798862] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573563    8315 system_pods.go:89] "coredns-66bc5c9577-tpfkd" [0665c9f9-0189-46cb-bc59-193f9f333001] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:21:55.573568    8315 system_pods.go:89] "etcd-addons-947553" [8408c6f8-0ebd-4177-b2e3-267c91515404] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:21:55.573572    8315 system_pods.go:89] "kube-apiserver-addons-947553" [92274636-3b61-44b8-bc41-f1578cf45b40] Running
	I1120 20:21:55.573584    8315 system_pods.go:89] "kube-controller-manager-addons-947553" [db9fd6db-13f6-4485-b865-b648b4151171] Running
	I1120 20:21:55.573588    8315 system_pods.go:89] "kube-proxy-92nmr" [7ff384ea-1b7c-49c7-941c-86933f1f9b0a] Running
	I1120 20:21:55.573591    8315 system_pods.go:89] "kube-scheduler-addons-947553" [c602111c-919d-48c3-a66c-f5dc920fb43a] Running
	I1120 20:21:55.573595    8315 system_pods.go:89] "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:21:55.573610    8315 system_pods.go:89] "registry-creds-764b6fb674-zvz8q" [1d25f917-4040-4b9c-8bac-9d75a55b633d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:21:55.573619    8315 system_pods.go:126] duration metric: took 11.497162ms to wait for k8s-apps to be running ...
	I1120 20:21:55.573629    8315 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:21:55.573680    8315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:21:55.821435    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 20:21:55.821456    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 20:21:56.372153    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 20:21:56.372176    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 20:21:57.167628    8315 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.167657    8315 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 20:21:57.654485    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:21:57.724650    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 20:21:57.727763    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728228    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:57.728257    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:57.728455    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:57.738040    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.420656069s)
	I1120 20:21:57.738102    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.373508925s)
	I1120 20:21:58.308598    8315 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 20:21:58.564754    8315 addons.go:239] Setting addon gcp-auth=true in "addons-947553"
	I1120 20:21:58.564806    8315 host.go:66] Checking if "addons-947553" exists ...
	I1120 20:21:58.566499    8315 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 20:21:58.568681    8315 main.go:143] libmachine: domain addons-947553 has defined MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569089    8315 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:a7:2c", ip: ""} in network mk-addons-947553: {Iface:virbr1 ExpiryTime:2025-11-20 21:21:21 +0000 UTC Type:0 Mac:52:54:00:7b:a7:2c Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:addons-947553 Clientid:01:52:54:00:7b:a7:2c}
	I1120 20:21:58.569115    8315 main.go:143] libmachine: domain addons-947553 has defined IP address 192.168.39.80 and MAC address 52:54:00:7b:a7:2c in network mk-addons-947553
	I1120 20:21:58.569249    8315 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/addons-947553/id_rsa Username:docker}
	I1120 20:21:58.833314    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.424339116s)
	I1120 20:21:58.833336    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.425455784s)
	I1120 20:21:58.833402    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.394606542s)
	I1120 20:22:00.317183    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.864775691s)
	I1120 20:22:00.317236    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.649563834s)
	I1120 20:22:00.317246    8315 addons.go:480] Verifying addon ingress=true in "addons-947553"
	I1120 20:22:00.317313    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.793066584s)
	I1120 20:22:00.317374    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.405778801s)
	I1120 20:22:00.317401    8315 addons.go:480] Verifying addon registry=true in "addons-947553"
	I1120 20:22:00.317473    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.841250467s)
	I1120 20:22:00.317500    8315 addons.go:480] Verifying addon metrics-server=true in "addons-947553"
	I1120 20:22:00.317549    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.638598976s)
	I1120 20:22:00.318753    8315 out.go:179] * Verifying ingress addon...
	I1120 20:22:00.319477    8315 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-947553 service yakd-dashboard -n yakd-dashboard
	
	I1120 20:22:00.319499    8315 out.go:179] * Verifying registry addon...
	I1120 20:22:00.321062    8315 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 20:22:00.321882    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 20:22:00.330255    8315 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:22:00.330274    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:00.330580    8315 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:22:00.330602    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
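Each kapi.go:96 line that follows is one iteration of a poll that lists pods by label selector and re-checks until every match reports phase Running. A sketch of that loop using client-go's wait helpers (the interval and timeout here are assumptions; kapi uses its own defaults):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel lists pods by selector and re-checks until every match
	// reports phase Running; each kapi.go:96 line is one such iteration.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitForLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry"))
	}
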
	I1120 20:22:00.843037    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:00.862027    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.136755    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.810594192s)
	I1120 20:22:01.136799    8315 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.563097568s)
	W1120 20:22:01.136810    8315 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 20:22:01.136824    8315 system_svc.go:56] duration metric: took 5.563190734s WaitForService to wait for kubelet
	I1120 20:22:01.136838    8315 retry.go:31] will retry after 297.745206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
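Both failures above are an ordering problem, not a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define its kind, and the API server has no REST mapping for VolumeSnapshotClass until those CRDs are established. The addon manager simply waits and re-applies, as the successful --force retry at 20:22:01 below shows. A sketch of that recover-and-retry loop, shelling out to kubectl (the attempt count and delay are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// applyWithRetry re-runs kubectl apply while the server still has no REST
	// mapping for a CRD-backed kind ("ensure CRDs are installed first"),
	// giving the CRDs from the same batch time to be established.
	func applyWithRetry(files []string, attempts int) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var out []byte
		var err error
		for i := 0; i < attempts; i++ {
			out, err = exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			if !strings.Contains(string(out), "no matches for kind") {
				return fmt.Errorf("apply failed: %v\n%s", err, out)
			}
			time.Sleep(300 * time.Millisecond) // let the CRDs become established
		}
		return fmt.Errorf("apply failed after %d attempts: %v\n%s", attempts, err, out)
	}

	func main() {
		files := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		}
		if err := applyWithRetry(files, 5); err != nil {
			panic(err)
		}
	}
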
	I1120 20:22:01.136835    8315 kubeadm.go:587] duration metric: took 10.845518493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:22:01.136866    8315 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:22:01.169336    8315 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1120 20:22:01.169377    8315 node_conditions.go:123] node cpu capacity is 2
	I1120 20:22:01.169391    8315 node_conditions.go:105] duration metric: took 32.519256ms to run NodePressure ...
	I1120 20:22:01.169403    8315 start.go:242] waiting for startup goroutines ...
	I1120 20:22:01.357701    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:01.358795    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.434928    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:22:01.868679    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:01.868782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.346294    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.352833    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.862753    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:02.890512    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:02.996195    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.34165692s)
	I1120 20:22:02.996225    8315 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.429699726s)
	I1120 20:22:02.996254    8315 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-947553"
	I1120 20:22:02.997930    8315 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:22:02.997950    8315 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 20:22:02.999363    8315 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 20:22:02.999980    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 20:22:03.000816    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 20:22:03.000833    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 20:22:03.047631    8315 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:22:03.047661    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.095774    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 20:22:03.095800    8315 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 20:22:03.172675    8315 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.172696    8315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 20:22:03.258447    8315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:22:03.328725    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.328999    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:03.506980    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:03.835051    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:03.838342    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.009598    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.059484    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.624514335s)
	I1120 20:22:04.342509    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.346146    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:04.552392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:04.655990    8315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.397510493s)
	I1120 20:22:04.657251    8315 addons.go:480] Verifying addon gcp-auth=true in "addons-947553"
	I1120 20:22:04.658765    8315 out.go:179] * Verifying gcp-auth addon...
	I1120 20:22:04.660962    8315 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 20:22:04.689345    8315 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 20:22:04.689379    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:04.830184    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:04.831805    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.008119    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.171353    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:05.336728    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:05.336869    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:05.517754    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:05.671439    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:05.828977    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:05.832656    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.008324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:06.167007    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:06.324520    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:06.327339    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.505702    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:06.665077    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:06.831323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:06.832004    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.005311    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:07.170575    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:07.326420    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.330401    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:07.504324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:07.665313    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:07.827482    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:07.830140    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.005717    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:08.168657    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:08.325483    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.326808    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:08.508047    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:08.664546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:08.828313    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:08.829419    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:09.004761    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:09.165417    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:09.325923    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:09.327133    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:09.503806    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:09.665158    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:09.827304    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:09.828458    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:10.005165    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:10.164419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:10.328020    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:10.328899    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:10.503540    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:10.665211    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:10.827565    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:10.828293    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.007088    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:11.172637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:11.329792    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.330515    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:11.506127    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:11.666152    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:11.832352    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:11.832833    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.009397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:12.164503    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:12.324601    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:12.330001    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.557333    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:12.690799    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:12.826246    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:12.827168    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:13.004570    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:13.166124    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:13.330939    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:13.334724    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:13.505747    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:13.664947    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:13.826640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:13.827501    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.005488    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:14.172285    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:14.325676    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.327874    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:14.505478    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:14.665377    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:14.828164    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:14.828324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:15.004108    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:15.165356    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:15.332218    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:15.345244    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:15.505401    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:15.665824    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:15.827117    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:15.827311    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.006364    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:16.177517    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:16.340592    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.341189    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:16.504797    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:16.664830    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:16.830245    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:16.830443    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.005532    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:17.167264    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:17.330014    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.331394    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:17.559675    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:17.678477    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:17.826495    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:17.832794    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.005502    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:18.166351    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:18.327573    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.327734    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:18.503894    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:18.666269    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:18.830279    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:18.832316    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:19.005728    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:19.166452    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:19.327371    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:19.329317    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:19.506362    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:19.670606    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:19.831060    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:19.832764    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:20.004618    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:20.166635    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:20.327601    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:20.327638    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:20.504392    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:20.665742    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:22:20.827471    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:22:20.829616    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:22:21.004605    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:22:21.169921    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical kapi.go:96 "Pending: [<nil>]" polling lines for these four selectors, each re-checked roughly every 500ms, elided through 20:22:28.944433 ...]
	I1120 20:22:28.944810    8315 kapi.go:107] duration metric: took 28.622926025s to wait for kubernetes.io/minikube-addons=registry ...
	I1120 20:22:29.006863    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... identical kapi.go:96 "Pending: [<nil>]" polling lines for app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, and kubernetes.io/minikube-addons=gcp-auth, each selector re-checked roughly every 500ms, elided ...]
	I1120 20:23:36.668019    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:36.828315    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:37.003923    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:37.171231    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:37.329115    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:37.504101    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:37.665063    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:37.827549    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:38.008085    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:38.165142    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:38.325522    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:38.504378    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:38.664419    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:38.826131    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:39.003818    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:39.169232    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:39.324564    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:39.504485    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:39.668374    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:39.828255    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:40.006466    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:40.166014    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:40.327358    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:40.510974    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:40.670391    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:40.826816    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:41.005686    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:41.164891    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:41.328274    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:41.503673    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:41.665805    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:41.825384    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:42.007673    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:42.164828    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:42.329991    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:42.507109    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:42.666970    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:42.827404    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:43.006050    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:43.165530    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:43.336903    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:43.508108    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:43.665050    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:43.828179    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:44.004826    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:44.168465    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:44.327802    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:44.588926    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:44.686035    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:44.836096    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:45.013912    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:45.170060    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:45.330109    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:45.506461    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:45.666266    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:45.833355    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:46.012759    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:46.165788    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:46.331536    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:46.544743    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:46.668681    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:46.826281    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:47.004579    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:47.164501    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:47.325301    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:47.510314    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:47.664541    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:47.825733    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:48.005390    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:48.164631    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:48.325040    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:48.503952    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:48.666328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:48.824449    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:49.004387    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:49.165135    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:49.324520    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:49.504929    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:49.665257    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:49.825179    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:50.004248    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:50.164504    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:50.326488    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:50.504139    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:50.665131    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:50.825464    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:51.004233    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:51.165223    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:51.324723    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:51.505340    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:51.665910    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:51.824647    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:52.004550    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:52.165026    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:52.324772    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:52.504303    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:52.667291    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:52.825223    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:53.004148    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:53.164388    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:53.325070    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:53.503625    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:53.665901    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:53.826412    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:54.003441    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:54.164614    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:54.325319    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:54.505054    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:54.665324    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:54.825610    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:55.004621    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:55.165405    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:55.326233    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:55.503470    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:55.665016    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:55.825575    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:56.004511    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:56.165472    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:56.325694    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:56.504017    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:56.663700    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:56.825810    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:57.004323    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:57.165204    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:57.324888    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:57.504535    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:57.664639    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:57.825026    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:58.003739    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:58.165764    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:58.325045    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:58.503360    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:58.664840    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:58.826605    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:59.003999    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:59.165275    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:59.325421    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:23:59.504637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:23:59.665014    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:23:59.824766    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:00.005128    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:00.164263    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:00.325333    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:00.504062    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:00.664931    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:00.826290    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:01.004640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:01.164832    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:01.325901    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:01.505129    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:01.664227    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:01.824719    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:02.004950    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:02.165053    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:02.325360    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:02.505959    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:02.664868    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:02.826277    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:03.004096    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:03.164445    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:03.324757    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:03.505252    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:03.665119    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:03.824454    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:04.004909    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:04.165591    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:04.325118    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:04.507564    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:04.664700    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:04.826799    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:05.005349    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:05.165155    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:05.324582    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:05.504443    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:05.665778    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:05.825741    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:06.004414    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:06.164474    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:06.326066    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:06.503776    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:06.664979    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:06.826056    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:07.003318    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:07.164124    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:07.324310    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:07.503413    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:07.664606    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:07.824831    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:08.004542    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:08.165571    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:08.325290    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:08.503944    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:08.666366    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:08.825256    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:09.003826    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:09.165200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:09.324763    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:09.505835    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:09.665113    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:09.824632    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:10.004172    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:10.164462    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:10.324992    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:10.503686    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:10.664930    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:10.825754    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:11.004000    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:11.163782    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:11.325549    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:11.504780    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:11.665314    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:11.825684    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:12.004180    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:12.164082    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:12.324141    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:12.504612    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:12.664748    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:12.825910    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:13.004630    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:13.165026    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:13.325684    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:13.504463    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:13.664189    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:13.824224    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:14.004212    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:14.165015    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:14.324331    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:14.507504    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:14.664678    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:14.826028    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:15.004824    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:15.165312    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:15.325310    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:15.503525    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:15.664637    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:15.825538    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:16.005397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:16.165397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:16.324350    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:16.504613    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:16.665640    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:16.825950    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:17.004189    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:17.167663    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:17.326720    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:17.508041    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:17.665546    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:17.828365    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:18.004058    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:18.165184    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:18.325634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:18.504817    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:18.668489    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:18.828972    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:19.005704    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:19.167268    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:19.334698    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:19.507751    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:19.667328    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:19.831249    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:20.005669    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:20.167145    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:20.328610    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:20.504643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:20.666213    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:20.830891    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:21.006991    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:21.167023    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:21.326125    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:21.512788    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:21.665384    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:21.829776    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:22.003972    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:22.170397    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:22.324898    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:22.505825    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:22.665603    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:22.827634    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:23.007579    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:23.168453    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:23.327180    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:23.503837    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:23.665184    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:23.824592    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:24.005482    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:24.164766    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:24.330141    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:24.504539    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:24.667427    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:24.835328    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:25.139729    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:25.240898    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:25.326048    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:25.505595    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:25.670610    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:25.827986    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:26.007659    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:26.164981    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:26.331893    8315 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:24:26.505078    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:26.665057    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:26.824303    8315 kapi.go:107] duration metric: took 2m26.503242857s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 20:24:27.004029    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.164962    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:27.504834    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:27.668267    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.007248    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.166983    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:28.507055    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:28.666163    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.005997    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.328979    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:24:29.505976    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:29.669956    8315 kapi.go:107] duration metric: took 2m25.008991629s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 20:24:29.672108    8315 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-947553 cluster.
	I1120 20:24:29.673437    8315 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 20:24:29.674752    8315 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
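	The gcp-auth messages above describe an opt-out label for the addon's admission webhook. Below is a minimal sketch, in client-go types, of a pod the webhook would skip; only the label key comes from the log, while the pod name, image, and the "true" value are illustrative assumptions.

	package main

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// skipGCPAuthPod returns a pod carrying the gcp-auth-skip-secret label so the
	// gcp-auth webhook leaves it alone. Everything except the label key is a
	// hypothetical example, not taken from the log.
	func skipGCPAuthPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds", // hypothetical name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed; log specifies only the key
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
	}
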
	I1120 20:24:30.011875    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:30.506718    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.005946    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:31.508062    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.004768    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:32.513385    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.006643    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:33.504200    8315 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:24:34.004984    8315 kapi.go:107] duration metric: took 2m31.004999967s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1120 20:24:34.006745    8315 out.go:179] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1120 20:24:34.007905    8315 addons.go:515] duration metric: took 2m43.716565511s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
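	The repeated kapi.go:96 entries above are the addon wait loop: each addon's label selector is listed a few times per second and the observed phase is logged until a matching pod reports Running, at which point kapi.go:107 emits the duration metric. A minimal sketch of that pattern with client-go follows; the helper name, poll interval, and message wording are illustrative, not minikube's actual kapi code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPod lists pods matching selector in ns and polls until one
	// reports Running, logging the observed state each round like the entries above.
	func waitForLabeledPod(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
						return nil
					}
				}
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			}
			time.Sleep(500 * time.Millisecond) // assumed interval; the timestamps above suggest sub-second polling
		}
		return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
	}
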
	I1120 20:24:34.007942    8315 start.go:247] waiting for cluster config update ...
	I1120 20:24:34.007968    8315 start.go:256] writing updated cluster config ...
	I1120 20:24:34.008267    8315 ssh_runner.go:195] Run: rm -f paused
	I1120 20:24:34.016789    8315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:24:34.020696    8315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.026522    8315 pod_ready.go:94] pod "coredns-66bc5c9577-tpfkd" is "Ready"
	I1120 20:24:34.026545    8315 pod_ready.go:86] duration metric: took 5.821939ms for pod "coredns-66bc5c9577-tpfkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.029616    8315 pod_ready.go:83] waiting for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.035420    8315 pod_ready.go:94] pod "etcd-addons-947553" is "Ready"
	I1120 20:24:34.035447    8315 pod_ready.go:86] duration metric: took 5.807107ms for pod "etcd-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.038012    8315 pod_ready.go:83] waiting for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.042359    8315 pod_ready.go:94] pod "kube-apiserver-addons-947553" is "Ready"
	I1120 20:24:34.042389    8315 pod_ready.go:86] duration metric: took 4.353428ms for pod "kube-apiserver-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.045156    8315 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.421067    8315 pod_ready.go:94] pod "kube-controller-manager-addons-947553" is "Ready"
	I1120 20:24:34.421095    8315 pod_ready.go:86] duration metric: took 375.9154ms for pod "kube-controller-manager-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:34.622667    8315 pod_ready.go:83] waiting for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.021658    8315 pod_ready.go:94] pod "kube-proxy-92nmr" is "Ready"
	I1120 20:24:35.021685    8315 pod_ready.go:86] duration metric: took 398.990446ms for pod "kube-proxy-92nmr" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.222270    8315 pod_ready.go:83] waiting for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621176    8315 pod_ready.go:94] pod "kube-scheduler-addons-947553" is "Ready"
	I1120 20:24:35.621208    8315 pod_ready.go:86] duration metric: took 398.900241ms for pod "kube-scheduler-addons-947553" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:24:35.621225    8315 pod_ready.go:40] duration metric: took 1.604402122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:24:35.668514    8315 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:24:35.670410    8315 out.go:179] * Done! kubectl is now configured to use "addons-947553" cluster and "default" namespace by default
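	The "==> CRI-O <==" section below is crio's debug log of the CRI gRPC traffic it serves: /runtime.v1.ImageService/ImageFsInfo and /runtime.v1.RuntimeService/ListContainers requests plus their full serialized responses. The same endpoints can be queried directly over the runtime socket; the following is a minimal sketch against the published CRI v1 API, where the socket path and the insecure transport are assumptions for a default minikube/CRI-O VM.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket; /var/run/crio/crio.sock is the usual default (assumed here).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Mirrors the /runtime.v1.RuntimeService/ListContainers calls logged below.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		ctrs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("containers:", len(ctrs.Containers))

		// Mirrors the /runtime.v1.ImageService/ImageFsInfo calls logged below.
		img := runtimeapi.NewImageServiceClient(conn)
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Println("image fs:", f.FsId.GetMountpoint(), "used bytes:", f.UsedBytes.GetValue())
		}
	}
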
	
	
	==> CRI-O <==
	Nov 20 20:28:26 addons-947553 crio[815]: time="2025-11-20 20:28:26.990122927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670506990095901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4fa2e47-25bb-4741-b4c1-f663ed3bab61 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:28:26 addons-947553 crio[815]: time="2025-11-20 20:28:26.991262494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cdcb258-5c31-4f5e-b66e-173b87017040 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:28:26 addons-947553 crio[815]: time="2025-11-20 20:28:26.991416634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cdcb258-5c31-4f5e-b66e-173b87017040 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:28:26 addons-947553 crio[815]: time="2025-11-20 20:28:26.992220743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c99
50d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandboxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandbox
Id:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd,PodSandboxId:402b0cbd3903b8cdea36adcd4f0c9f425b6855d1790d160c8372f64a376bb50c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763670149740471174,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-znfrl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f9329f2b-eaa2-4b45-b91d-3433062e9ac0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plu
gin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:
0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85
ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d659
7086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cdcb258-5c31-4f5e-b66e-173b87017040 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.035277017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52caf934-83c1-4975-ade5-db10b8c87681 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.035377255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52caf934-83c1-4975-ade5-db10b8c87681 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.037692420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ea3d79e-dbed-45cf-9953-3b960e247f08 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.038851800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670507038823751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ea3d79e-dbed-45cf-9953-3b960e247f08 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.073558706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01a8c370-345f-49ca-9604-c41821504cb0 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.073655377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01a8c370-345f-49ca-9604-c41821504cb0 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.075275119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=672b0d6d-1898-4581-a361-fca5a06ee286 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.076697962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670507076644655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=672b0d6d-1898-4581-a361-fca5a06ee286 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.077912480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3cc0efb4-c79d-4f61-8c25-73896f37d204 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.077988805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3cc0efb4-c79d-4f61-8c25-73896f37d204 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.078438095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:83c7cffc192d1eac6bf6d0f6c07019ffe26d06bc6020e282d4512b3adfe9c49f,PodSandboxId:30b4f748049f457a96b19a523a33b9138f60e671cd770d68978a3efe02ca1a03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763670279170271077,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 709b0bdb-dd50-4d23-b6f1-1f659e2347cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1182df9d08d19dc3b5f8650597a3aac10b8daf74d976f80fcec403c26e34481c,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763670273023625087,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c592e1a3ecfd79d1e1f186091bfe341d07d501ba31d70ed66132a85c197aef7,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763670271024212410,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26090ac2445292ca8fb3e540927fb5322b96c6173b620156078e83feacef93e,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763670266299978842,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d8b656975545fa72d155af706e36e1e0cf35ed1a81e4df507c6f18a4b73cc6,PodSandboxId:0a1212c05ea885806589aa446672e17ce1c13f1d36c0cf197d6b2717f8eb2e2e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763670265541604903,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-6hpj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b8dafe03-8e55-485a-ace3-f516c99
50d0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a781be0336bcbc91a90c1b5f2dffb18fe314607d0bf7d9904f92929eb66ece44,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763670258100321628,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7f17ef5a53826e2e18e2f067c4e0be92d10343bab0c44052b16945f5c0d7873,PodSandboxId:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763670226692031527,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb8563d67522d6199bda06521965feba0d412683ceec715175b5d9347c740977,PodSandboxId:367d0442cb7aae9b5b3a75c4e999f8839e3b88685bffcf0f9c407911504a7638,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763670224731409160,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84205e8a-23e5-4ebe-b33f-a46942296f86,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68eba1ff29e5c176ffded811901de77ec04e3f876021f59f5863d0942a2a246b,PodSandboxId:77498a7d4320e1143ff1142445925ac37ea3f5a9ca029ee0b33aad2412a4a31e,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763670222913456316,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868333f0-5dbf-483b-b7ee-43b7b6a4f181,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4189eecca6982b7cdb1f6ca8e01dd874a652ceb7555a3d0aaa87f9b7194b41fa,PodSandboxId:64e4a94a11b34e7a83cc3b5b322775b9e35023c372731e2639f005700f31549f,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670221458630426,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-7n9bg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbdfa730-2213-42b5-b013-00295af0ba71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13c5a7e788c0c68c000a8dc15c02d7c43e80e9350fd2b3813ec7a3be3b2f364,PodSandbox
Id:3a8d7fc532be92b0f88d921ab1a9a62cff0698f3e59db31a662c5c2a9c620ca8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763670221288545980,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-xtf7r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2c6b57-2689-48d9-9302-31ea71357362,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ebdc020b24013b49283ac68913053fb44795934afac8bb1d83633808a488838a,PodSandboxId:aab95fc7e29c51b064d7f4b4e5303a85cee916d25d3367f30d511978623e039d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670219646990583,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xqmtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0fce45b-b2f5-4a6c-b526-3e3ca554c036,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d944607d06de9f7b1b32a6f4e64b8b2ae47235a975d3d9176a6c832fb34c14,PodSandboxId:f811a556e97294d5e11502aacb29c0998f2b0ac8f445c23c6ad44dd9753ed457,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763670219493867663,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-944pl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b562952-6c74-4bb4-ab74-47dbf0c99b00,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf24d40d09d977f69e1b08b5e19c48da4411910d3449e1d0a2bbc777db944dad,PodSandboxId:b81a00087e290b52fcf3a9ab89dc176036911ed71fbe5dec2500da4d36ad0d90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763670217749957898,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-whk72,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55c0ce11-3b32-411e-9861-c3ef19416d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7581f788bba24b62fafa76e3d963256cf8ee7000bc85e08938965942de49d3bd,PodSandboxId:402b0cbd3903b8cdea36adcd4f0c9f425b6855d1790d160c8372f64a376bb50c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763670149740471174,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-znfrl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f9329f2b-eaa2-4b45-b91d-3433062e9ac0,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed48acc4e6b6d162feedcecc92d299d2f1ad1835fb41bbd6426329a3a1f7bd3,PodSandboxId:e08ae02d9782106a9536ab824c3dfee650ecdac36021a3e103e870ce70d991e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763670144816044135,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3988d2f6-2df1-49e8-8aa5-cf6529799ce0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806,PodSandboxId:7a8aea6b568732441641364fb64cc00a164fbfa5c26edd9c3261098e72af8486,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763670121103318950,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: ad582478-b86f-4230-9f35-836dfdfac5de,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc04223232fbc2e8d1fbc458a4e943e392f2bb9b53678ddcb01c0f079161353e,PodSandboxId:1c75fb61317d99f3fe0523a2d6f2c326b2a2194eca48b8246ab6696e75bed6fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763670120259194107,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plu
gin-sl95v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfbe4372-28d1-4dc0-ace1-e7096a3042ce,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86,PodSandboxId:1b8aec92deac04af24cb173fc780adbcb1d22a24cd1f5449ab5370897d820c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763670112307438406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tpfkd,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 0665c9f9-0189-46cb-bc59-193f9f333001,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf,PodSandboxId:44459bb4c1592ebd166422a7da472de03740d4b2233afd8124d3f65e69733841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763670111402173591,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-92nmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff384ea-1b7c-49c7-941c-86933f1f9b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b,PodSandboxId:7854300bd65f2eb7007136254fccdd03b836ea19e6d5d236ea4f0d3324b68c3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763670099955370077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaf9c8220305171251451e6ff3491ef0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2,PodSandboxId:c0df804390cc352e077376a00132661c86d1f39a13b50fe51dcc814ca154cbab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:
0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763670099917018164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1563a6fc8f372e84c559079393d0798,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45,PodSandboxId:959ac708555005b85
ad0e47e8ece95d8e7b40c5b7786d49e8422d8cf1831d844,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763670099880035650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100ae3428c2e35d8e1cf2deaa80d6526,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f04fbc5a9a9df207d659
7086b68edf7fb688fef37434c83a09a37653c2cf2be,PodSandboxId:c73098b299e79661f7b30974b512a87fa9096006f0af3ce3968bf4900c393430,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763670099881296813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-947553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3e11beb64217e1f3209d29f540719d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3cc0efb4-c79d-4f61-8c25-73896f37d204 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.111301151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70df1ea8-db73-4b92-afdc-ab652fb3f644 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.111372480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70df1ea8-db73-4b92-afdc-ab652fb3f644 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.112850248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a565da9-76da-489d-8047-4559d0d604f3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:28:27 addons-947553 crio[815]: time="2025-11-20 20:28:27.115185824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763670507115096683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484697,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a565da9-76da-489d-8047-4559d0d604f3 name=/runtime.v1.ImageService/ImageFsInfo
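	The paired Request/Response entries above are CRI-O's debug tracing of CRI gRPC calls (ListContainers, Version, ImageFsInfo) arriving on its socket. As a point of reference only, a minimal Go sketch that issues the same RuntimeService calls is shown below; the socket path and the k8s.io/cri-api client package are assumptions based on CRI-O defaults, not something this test run executed.

	// Minimal sketch (not part of the test suite): query CRI-O's RuntimeService
	// directly, mirroring the Version and ListContainers RPCs traced above.
	// Assumes the default CRI-O socket path /var/run/crio/crio.sock.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same call as the "/runtime.v1.RuntimeService/Version" entries above.
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Same call as "/runtime.v1.RuntimeService/ListContainers" with no filter,
		// which returns the full container list dumped above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}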
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	83c7cffc192d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   30b4f748049f4       busybox                                    default
	1182df9d08d19       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	3c592e1a3ecfd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	a26090ac24452       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            4 minutes ago       Running             liveness-probe                           0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	d3d8b65697554       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             4 minutes ago       Running             controller                               0                   0a1212c05ea88       ingress-nginx-controller-6c8bf45fb-6hpj8   ingress-nginx
	a781be0336bcb       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           4 minutes ago       Running             hostpath                                 0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	c7f17ef5a5382       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                4 minutes ago       Running             node-driver-registrar                    0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	fb8563d67522d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              4 minutes ago       Running             csi-resizer                              0                   367d0442cb7aa       csi-hostpath-resizer-0                     kube-system
	68eba1ff29e5c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             4 minutes ago       Running             csi-attacher                             0                   77498a7d4320e       csi-hostpath-attacher-0                    kube-system
	4189eecca6982       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   64e4a94a11b34       snapshot-controller-7d9fbc56b8-7n9bg       kube-system
	b13c5a7e788c0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   4 minutes ago       Running             csi-external-health-monitor-controller   0                   3a8d7fc532be9       csi-hostpathplugin-xtf7r                   kube-system
	ebdc020b24013       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   4 minutes ago       Exited              patch                                    0                   aab95fc7e29c5       ingress-nginx-admission-patch-xqmtg        ingress-nginx
	30d944607d06d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   f811a556e9729       snapshot-controller-7d9fbc56b8-944pl       kube-system
	cf24d40d09d97       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   4 minutes ago       Exited              create                                   0                   b81a00087e290       ingress-nginx-admission-create-whk72       ingress-nginx
	7581f788bba24       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             5 minutes ago       Running             local-path-provisioner                   0                   402b0cbd3903b       local-path-provisioner-648f6765c9-znfrl    local-path-storage
	3ed48acc4e6b6       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               6 minutes ago       Running             minikube-ingress-dns                     0                   e08ae02d97821       kube-ingress-dns-minikube                  kube-system
	1f0a03ae88dd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             6 minutes ago       Running             storage-provisioner                      0                   7a8aea6b56873       storage-provisioner                        kube-system
	dc04223232fbc       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     6 minutes ago       Running             amd-gpu-device-plugin                    0                   1c75fb61317d9       amd-gpu-device-plugin-sl95v                kube-system
	44ea167ad7358       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             6 minutes ago       Running             coredns                                  0                   1b8aec92deac0       coredns-66bc5c9577-tpfkd                   kube-system
	107772b7cd302       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             6 minutes ago       Running             kube-proxy                               0                   44459bb4c1592       kube-proxy-92nmr                           kube-system
	1d2feff972c82       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             6 minutes ago       Running             kube-scheduler                           0                   7854300bd65f2       kube-scheduler-addons-947553               kube-system
	3ce144c0d06ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             6 minutes ago       Running             kube-apiserver                           0                   c0df804390cc3       kube-apiserver-addons-947553               kube-system
	3f04fbc5a9a9d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             6 minutes ago       Running             kube-controller-manager                  0                   c73098b299e79       kube-controller-manager-addons-947553      kube-system
	1b4f51aca4917       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             6 minutes ago       Running             etcd                                     0                   959ac70855500       etcd-addons-947553                         kube-system
	
	
	==> coredns [44ea167ad7358fab588f083040ccca0863d5d9406d5250085fbbb77d84b29f86] <==
	[INFO] 10.244.0.8:38281 - 13381 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000419309s
	[INFO] 10.244.0.8:38281 - 4239 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000335145s
	[INFO] 10.244.0.8:38281 - 63093 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000099875s
	[INFO] 10.244.0.8:38281 - 4801 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008321s
	[INFO] 10.244.0.8:38281 - 39674 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000264028s
	[INFO] 10.244.0.8:38281 - 62546 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124048s
	[INFO] 10.244.0.8:38281 - 16805 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000647057s
	[INFO] 10.244.0.8:51997 - 13985 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160466s
	[INFO] 10.244.0.8:51997 - 14298 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000220652s
	[INFO] 10.244.0.8:45076 - 61133 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125223s
	[INFO] 10.244.0.8:45076 - 60865 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000152664s
	[INFO] 10.244.0.8:36522 - 44178 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060404s
	[INFO] 10.244.0.8:36522 - 43995 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078705s
	[INFO] 10.244.0.8:59475 - 4219 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116054s
	[INFO] 10.244.0.8:59475 - 4422 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010261s
	[INFO] 10.244.0.23:44890 - 42394 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000390546s
	[INFO] 10.244.0.23:40413 - 38581 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001287022s
	[INFO] 10.244.0.23:48952 - 288 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001963576s
	[INFO] 10.244.0.23:45971 - 54062 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002169261s
	[INFO] 10.244.0.23:46787 - 19498 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139649s
	[INFO] 10.244.0.23:50609 - 21977 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000067547s
	[INFO] 10.244.0.23:44756 - 29378 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.005330443s
	[INFO] 10.244.0.23:59657 - 39385 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005346106s
	[INFO] 10.244.0.27:42107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000463345s
	[INFO] 10.244.0.27:53096 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000254044s
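
Note: the NXDOMAIN/NOERROR pattern above is ordinary resolv.conf search-path expansion (ndots:5), not a DNS failure: each search domain is tried and rejected before the absolute name "registry.kube-system.svc.cluster.local." answers NOERROR with an A record. A minimal Go sketch of the same lookup, assuming it runs inside a cluster pod (the program is illustrative and not part of the test suite):

package main

import (
	"fmt"
	"net"
)

func main() {
	// With the kubelet-written resolv.conf (ndots:5), short names are first
	// expanded against each search domain (the NXDOMAIN log lines above),
	// then queried as an absolute name (the final NOERROR A/AAAA answers).
	addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs) // expected: the registry ClusterIP
}

Since CoreDNS returned NOERROR for the registry service, the registry-test wget timeout in this run looks more like a connect/readiness problem than a name-resolution one.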
	
	
	==> describe nodes <==
	Name:               addons-947553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-947553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-947553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_21_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-947553
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-947553"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:21:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-947553
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:28:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:26:21 +0000   Thu, 20 Nov 2025 20:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    addons-947553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ab490c5e4f046af88ecdee8117466b4
	  System UUID:                2ab490c5-e4f0-46af-88ec-dee8117466b4
	  Boot ID:                    1ea0245c-4d70-493b-9a36-f639a36dba5f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-6hpj8                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         6m28s
	  kube-system                 amd-gpu-device-plugin-sl95v                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 coredns-66bc5c9577-tpfkd                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m37s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 csi-hostpathplugin-xtf7r                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 etcd-addons-947553                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m42s
	  kube-system                 kube-apiserver-addons-947553                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-controller-manager-addons-947553                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-92nmr                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-scheduler-addons-947553                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 snapshot-controller-7d9fbc56b8-7n9bg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 snapshot-controller-7d9fbc56b8-944pl                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  local-path-storage          helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  local-path-storage          local-path-provisioner-648f6765c9-znfrl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-nqz6v                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m35s  kube-proxy       
	  Normal  Starting                 6m42s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m42s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m42s  kubelet          Node addons-947553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s  kubelet          Node addons-947553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s  kubelet          Node addons-947553 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m41s  kubelet          Node addons-947553 status is now: NodeReady
	  Normal  RegisteredNode           6m38s  node-controller  Node addons-947553 event: Registered Node addons-947553 in Controller
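
Note: the percentages in "Allocated resources" above are summed pod requests divided by node allocatable, rounded down. A sketch of that arithmetic with this node's numbers (constants copied from the tables above; the program is illustrative only):

package main

import "fmt"

func main() {
	const (
		cpuRequestsMilli    = 850        // 850m from "Allocated resources"
		cpuAllocatableMilli = 2000       // 2 CPUs from "Allocatable"
		memRequestsKi       = 388 * 1024 // 388Mi from "Allocated resources"
		memAllocatableKi    = 4001784    // from "Allocatable"
	)
	fmt.Printf("cpu: %d%%\n", cpuRequestsMilli*100/cpuAllocatableMilli) // 42%
	fmt.Printf("memory: %d%%\n", memRequestsKi*100/memAllocatableKi)    // 9%
}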
	
	
	==> dmesg <==
	[  +0.135292] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.656096] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.754334] kauditd_printk_skb: 318 callbacks suppressed
	[Nov20 20:22] kauditd_printk_skb: 302 callbacks suppressed
	[  +3.551453] kauditd_printk_skb: 395 callbacks suppressed
	[  +6.168214] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.651247] kauditd_printk_skb: 17 callbacks suppressed
	[Nov20 20:23] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.679825] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.000053] kauditd_printk_skb: 157 callbacks suppressed
	[  +5.059481] kauditd_printk_skb: 109 callbacks suppressed
	[Nov20 20:24] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.445964] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.477031] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.089818] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:25] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.536974] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.509608] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000043] kauditd_printk_skb: 22 callbacks suppressed
	[Nov20 20:26] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.002720] kauditd_printk_skb: 50 callbacks suppressed
	[ +11.737417] kauditd_printk_skb: 103 callbacks suppressed
	[Nov20 20:27] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [1b4f51aca4917c05f3fb4b6d847e9051e8eff9d58deaa88526f022fb3b5a2f45] <==
	{"level":"info","ts":"2025-11-20T20:23:44.570260Z","caller":"traceutil/trace.go:172","msg":"trace[663488031] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1136; }","duration":"154.066668ms","start":"2025-11-20T20:23:44.416165Z","end":"2025-11-20T20:23:44.570231Z","steps":["trace[663488031] 'read index received'  (duration: 154.021094ms)","trace[663488031] 'applied index is now lower than readState.Index'  (duration: 44.411µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:23:44.570877Z","caller":"traceutil/trace.go:172","msg":"trace[715433296] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"233.967936ms","start":"2025-11-20T20:23:44.336900Z","end":"2025-11-20T20:23:44.570868Z","steps":["trace[715433296] 'process raft request'  (duration: 233.871288ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571611Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.483381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:23:44.571673Z","caller":"traceutil/trace.go:172","msg":"trace[884414279] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1098; }","duration":"111.548598ms","start":"2025-11-20T20:23:44.460117Z","end":"2025-11-20T20:23:44.571666Z","steps":["trace[884414279] 'agreement among raft nodes before linearized reading'  (duration: 111.465445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:23:44.571061Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.869609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.80\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-11-20T20:23:44.571810Z","caller":"traceutil/trace.go:172","msg":"trace[1446846650] range","detail":"{range_begin:/registry/masterleases/192.168.39.80; range_end:; response_count:1; response_revision:1098; }","duration":"155.64428ms","start":"2025-11-20T20:23:44.416161Z","end":"2025-11-20T20:23:44.571805Z","steps":["trace[1446846650] 'agreement among raft nodes before linearized reading'  (duration: 154.810085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:23:46.528477Z","caller":"traceutil/trace.go:172","msg":"trace[982384876] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"154.809492ms","start":"2025-11-20T20:23:46.373650Z","end":"2025-11-20T20:23:46.528459Z","steps":["trace[982384876] 'process raft request'  (duration: 154.328485ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.123570Z","caller":"traceutil/trace.go:172","msg":"trace[1335763238] linearizableReadLoop","detail":"{readStateIndex:1253; appliedIndex:1253; }","duration":"134.10576ms","start":"2025-11-20T20:24:24.989438Z","end":"2025-11-20T20:24:25.123544Z","steps":["trace[1335763238] 'read index received'  (duration: 134.100119ms)","trace[1335763238] 'applied index is now lower than readState.Index'  (duration: 5.092µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:25.123838Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.381481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-11-20T20:24:25.123864Z","caller":"traceutil/trace.go:172","msg":"trace[1178674559] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1204; }","duration":"134.473479ms","start":"2025-11-20T20:24:24.989384Z","end":"2025-11-20T20:24:25.123857Z","steps":["trace[1178674559] 'agreement among raft nodes before linearized reading'  (duration: 134.302699ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:24:25.124126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.465459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:25.124145Z","caller":"traceutil/trace.go:172","msg":"trace[392254424] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1205; }","duration":"131.486967ms","start":"2025-11-20T20:24:24.992652Z","end":"2025-11-20T20:24:25.124139Z","steps":["trace[392254424] 'agreement among raft nodes before linearized reading'  (duration: 131.453666ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:25.124311Z","caller":"traceutil/trace.go:172","msg":"trace[1682962710] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"237.606056ms","start":"2025-11-20T20:24:24.886699Z","end":"2025-11-20T20:24:25.124305Z","steps":["trace[1682962710] 'process raft request'  (duration: 237.320378ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.314678Z","caller":"traceutil/trace.go:172","msg":"trace[1797119853] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1279; }","duration":"155.702658ms","start":"2025-11-20T20:24:29.158960Z","end":"2025-11-20T20:24:29.314662Z","steps":["trace[1797119853] 'read index received'  (duration: 155.696769ms)","trace[1797119853] 'applied index is now lower than readState.Index'  (duration: 4.683µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T20:24:29.314797Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.822209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:24:29.314815Z","caller":"traceutil/trace.go:172","msg":"trace[163313341] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1230; }","duration":"155.853309ms","start":"2025-11-20T20:24:29.158956Z","end":"2025-11-20T20:24:29.314809Z","steps":["trace[163313341] 'agreement among raft nodes before linearized reading'  (duration: 155.793828ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:29.315341Z","caller":"traceutil/trace.go:172","msg":"trace[932727743] transaction","detail":"{read_only:false; response_revision:1231; number_of_response:1; }","duration":"158.601334ms","start":"2025-11-20T20:24:29.156731Z","end":"2025-11-20T20:24:29.315333Z","steps":["trace[932727743] 'process raft request'  (duration: 158.264408ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.860975Z","caller":"traceutil/trace.go:172","msg":"trace[570114600] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"232.699788ms","start":"2025-11-20T20:24:38.628262Z","end":"2025-11-20T20:24:38.860962Z","steps":["trace[570114600] 'process raft request'  (duration: 232.584342ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:24:38.862428Z","caller":"traceutil/trace.go:172","msg":"trace[1632150606] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"194.825132ms","start":"2025-11-20T20:24:38.667594Z","end":"2025-11-20T20:24:38.862419Z","steps":["trace[1632150606] 'process raft request'  (duration: 194.764757ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:25:59.796917Z","caller":"traceutil/trace.go:172","msg":"trace[1018787678] transaction","detail":"{read_only:false; response_revision:1587; number_of_response:1; }","duration":"178.519957ms","start":"2025-11-20T20:25:59.618371Z","end":"2025-11-20T20:25:59.796891Z","steps":["trace[1018787678] 'process raft request'  (duration: 178.419059ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:26:07.306954Z","caller":"traceutil/trace.go:172","msg":"trace[1832150044] linearizableReadLoop","detail":"{readStateIndex:1696; appliedIndex:1696; }","duration":"207.161975ms","start":"2025-11-20T20:26:07.099774Z","end":"2025-11-20T20:26:07.306936Z","steps":["trace[1832150044] 'read index received'  (duration: 207.151183ms)","trace[1832150044] 'applied index is now lower than readState.Index'  (duration: 6.599µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:26:07.307088Z","caller":"traceutil/trace.go:172","msg":"trace[519307734] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"362.807072ms","start":"2025-11-20T20:26:06.944270Z","end":"2025-11-20T20:26:07.307077Z","steps":["trace[519307734] 'process raft request'  (duration: 362.695059ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:26:07.307192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.369314ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3725"}
	{"level":"info","ts":"2025-11-20T20:26:07.307216Z","caller":"traceutil/trace.go:172","msg":"trace[875135275] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:1621; }","duration":"207.439279ms","start":"2025-11-20T20:26:07.099770Z","end":"2025-11-20T20:26:07.307209Z","steps":["trace[875135275] 'agreement among raft nodes before linearized reading'  (duration: 207.290795ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:26:07.307851Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T20:26:06.944254Z","time spent":"362.881173ms","remote":"127.0.0.1:35880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3014,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:1620 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:2970 >> failure:<request_range:<key:\"/registry/pods/default/registry-test\" > >"}
	
	
	==> kernel <==
	 20:28:27 up 7 min,  0 users,  load average: 1.22, 1.94, 1.06
	Linux addons-947553 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3ce144c0d06eaba13972a77129478b309512cb60cac5599a59fddb6d928a47d2] <==
	W1120 20:23:00.364766       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:00.364849       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 20:23:00.364867       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1120 20:23:00.365762       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:00.365790       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1120 20:23:00.366969       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1120 20:23:34.247008       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	W1120 20:23:34.253741       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:23:34.253819       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1120 20:23:34.256485       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	E1120 20:23:34.259388       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	E1120 20:23:34.271232       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.97.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.97.199:443: connect: connection refused" logger="UnhandledError"
	I1120 20:23:34.434058       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1120 20:24:45.470175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50698: use of closed network connection
	E1120 20:24:45.698946       1 conn.go:339] Error on socket receive: read tcp 192.168.39.80:8443->192.168.39.1:50724: use of closed network connection
	I1120 20:24:55.153735       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.73.86"}
	I1120 20:25:35.271669       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1120 20:26:07.917022       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1120 20:26:08.188570       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.64.46"}
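
Note: the 503s above come from the apiserver's aggregation layer probing the v1beta1.metrics.k8s.io APIService before its backing metrics-server pod was reachable; the 20:23:34 "Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager" line shows it recovered. A hedged dynamic-client sketch that reads that APIService's conditions (in-cluster config is an assumption):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: runs inside the cluster
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{
		Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices",
	}
	obj, err := dyn.Resource(gvr).Get(context.Background(),
		"v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// While the backend answers 503, Available reads False; after the
	// 20:23:34 recovery above it should read True.
	conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
	for _, c := range conds {
		m := c.(map[string]interface{})
		fmt.Println(m["type"], m["status"], m["reason"])
	}
}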
	
	
	==> kube-controller-manager [3f04fbc5a9a9df207d6597086b68edf7fb688fef37434c83a09a37653c2cf2be] <==
	I1120 20:21:49.551177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 20:21:49.551353       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:21:49.558938       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:21:49.560164       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:21:49.564482       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:21:49.572448       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:21:49.574897       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:21:49.579336       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:21:54.678834       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1120 20:21:58.672593       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1120 20:22:19.544397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:22:19.546674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1120 20:22:19.546720       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1120 20:22:19.600217       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1120 20:22:19.618675       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 20:22:19.646978       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:22:19.720013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1120 20:22:49.656241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:22:49.730478       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 20:23:19.661239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1120 20:23:19.740631       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1120 20:24:55.213061       1 replica_set.go:587] "Unhandled Error" err="sync \"headlamp/headlamp-6945c6f4d\" failed with pods \"headlamp-6945c6f4d-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I1120 20:24:58.991066       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1120 20:26:18.292121       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1120 20:26:30.134630       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	
	
	==> kube-proxy [107772b7cd3024e562140a2f0c499c3bc779e3c6da69fd459b0cda50a046bbcf] <==
	I1120 20:21:51.944081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:21:52.047283       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:21:52.059178       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E1120 20:21:52.063486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:21:52.317013       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:21:52.317608       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:21:52.319592       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:21:52.353676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:21:52.353988       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:21:52.354004       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:21:52.365989       1 config.go:200] "Starting service config controller"
	I1120 20:21:52.366010       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:21:52.373413       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:21:52.373476       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:21:52.373601       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:21:52.373606       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:21:52.404955       1 config.go:309] "Starting node config controller"
	I1120 20:21:52.405179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:21:52.405460       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:21:52.474183       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:21:52.474283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:21:52.570175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1d2feff972c82ccae359067a5249488d229f00d95bee2d536ae297635b9c403b] <==
	E1120 20:21:42.658146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:42.658289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:42.658479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:42.659065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:21:42.659191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:42.659355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:42.659676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:21:42.660629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:43.501696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:21:43.568808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:21:43.596853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:21:43.607731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:21:43.612970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:21:43.637766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:21:43.650165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:21:43.687838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:21:43.786838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:21:43.825959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:21:43.878175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:21:43.895745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:21:43.953162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:21:43.991210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:21:44.021889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:21:44.053100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 20:21:46.731200       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:27:25 addons-947553 kubelet[1518]: E1120 20:27:25.666113    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670445665128435  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:27:29 addons-947553 kubelet[1518]: I1120 20:27:29.330115    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sl95v" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:27:32 addons-947553 kubelet[1518]: E1120 20:27:32.333370    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-nqz6v" podUID="3ac69ce1-c8e4-478b-bc45-5b450445f539"
	Nov 20 20:27:34 addons-947553 kubelet[1518]: E1120 20:27:34.123229    1518 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 20 20:27:34 addons-947553 kubelet[1518]: E1120 20:27:34.123581    1518 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 20 20:27:34 addons-947553 kubelet[1518]: E1120 20:27:34.124118    1518 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(261f896c-810b-4000-a18d-13ad1a4b0967): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:27:34 addons-947553 kubelet[1518]: E1120 20:27:34.125015    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="261f896c-810b-4000-a18d-13ad1a4b0967"
	Nov 20 20:27:34 addons-947553 kubelet[1518]: E1120 20:27:34.588639    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="261f896c-810b-4000-a18d-13ad1a4b0967"
	Nov 20 20:27:35 addons-947553 kubelet[1518]: E1120 20:27:35.669456    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670455669035562  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:27:35 addons-947553 kubelet[1518]: E1120 20:27:35.669481    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670455669035562  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:27:43 addons-947553 kubelet[1518]: E1120 20:27:43.333680    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-nqz6v" podUID="3ac69ce1-c8e4-478b-bc45-5b450445f539"
	Nov 20 20:27:45 addons-947553 kubelet[1518]: E1120 20:27:45.671886    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670465671366497  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:27:45 addons-947553 kubelet[1518]: E1120 20:27:45.671932    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670465671366497  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:27:54 addons-947553 kubelet[1518]: E1120 20:27:54.332753    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-nqz6v" podUID="3ac69ce1-c8e4-478b-bc45-5b450445f539"
	Nov 20 20:27:55 addons-947553 kubelet[1518]: E1120 20:27:55.675150    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670475674792586  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:27:55 addons-947553 kubelet[1518]: E1120 20:27:55.675173    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670475674792586  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:28:05 addons-947553 kubelet[1518]: E1120 20:28:05.678780    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670485677463888  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:28:05 addons-947553 kubelet[1518]: E1120 20:28:05.678825    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670485677463888  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:28:06 addons-947553 kubelet[1518]: E1120 20:28:06.333537    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-nqz6v" podUID="3ac69ce1-c8e4-478b-bc45-5b450445f539"
	Nov 20 20:28:15 addons-947553 kubelet[1518]: E1120 20:28:15.682055    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670495681580935  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:28:15 addons-947553 kubelet[1518]: E1120 20:28:15.682109    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670495681580935  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:28:20 addons-947553 kubelet[1518]: E1120 20:28:20.331862    1518 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-nqz6v" podUID="3ac69ce1-c8e4-478b-bc45-5b450445f539"
	Nov 20 20:28:22 addons-947553 kubelet[1518]: I1120 20:28:22.329921    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:28:25 addons-947553 kubelet[1518]: E1120 20:28:25.685264    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763670505684666806  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	Nov 20 20:28:25 addons-947553 kubelet[1518]: E1120 20:28:25.685295    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763670505684666806  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:484697}  inodes_used:{value:176}}"
	
	
	==> storage-provisioner [1f0a03ae88dd249a7e71bb7d2aca8324b4022ac152aea28c7e5c522a6fb23806] <==
	W1120 20:28:02.704213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:04.708677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:04.717161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:06.721790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:06.727897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:08.730915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:08.739593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:10.743586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:10.748938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:12.753040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:12.758646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:14.763451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:14.771800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:16.776313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:16.782647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:18.786778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:18.794083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:20.797844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:20.807961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:22.811395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:22.818036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:24.822604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:24.827877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:26.833977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:28:26.850016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
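The storage-provisioner block above is dominated by client-go deprecation warnings: v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the replacement read path, assuming in-cluster credentials and a hypothetical "registry" service in kube-system (this is not the provisioner's actual code):

// endpointslice_read.go — minimal sketch, not the provisioner's code:
// reads discovery.k8s.io/v1 EndpointSlices instead of deprecated v1 Endpoints.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the process runs in-cluster, as the provisioner does.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Slices belonging to one service are selected by the standard
	// kubernetes.io/service-name label; "registry" is a hypothetical example.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
		context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=registry"},
	)
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			// Same address data the old v1 Endpoints object carried.
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}
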
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-947553 -n addons-947553
helpers_test.go:269: (dbg) Run:  kubectl --context addons-947553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a yakd-dashboard-5ff678cb9-nqz6v
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a yakd-dashboard-5ff678cb9-nqz6v
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a yakd-dashboard-5ff678cb9-nqz6v: exit status 1 (108.919722ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:26:08 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8bvn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s8bvn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m20s                default-scheduler  Successfully assigned default/nginx to addons-947553
	  Warning  Failed     54s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     54s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    54s                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     54s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    40s (x2 over 2m20s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-947553/192.168.39.80
	Start Time:       Thu, 20 Nov 2025 20:25:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw89l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mw89l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  2m59s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-947553
	  Warning  Failed     115s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     115s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    114s                  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     114s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    100s (x2 over 2m59s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7w87 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-w7w87:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-whk72" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xqmtg" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-nqz6v" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-947553 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-whk72 ingress-nginx-admission-patch-xqmtg helper-pod-create-pvc-edf6ba20-b020-4e81-8975-a2c9a9b8cd4a yakd-dashboard-5ff678cb9-nqz6v: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 addons disable yakd --alsologtostderr -v=1: (5.756700714s)
--- FAIL: TestAddons/parallel/Yakd (128.21s)
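The nginx and task-pv-pod failures above, like the Yakd one, all trace back to Docker Hub's anonymous pull limit (toomanyrequests). Docker documents a way to probe the remaining quota via the dedicated ratelimitpreview/test repository; a sketch of that probe follows, assuming the documented endpoints and RateLimit headers are still current:

// hub_ratelimit.go — sketch of Docker Hub's documented rate-limit probe.
// Endpoints and header names are as documented at docs.docker.com and may change.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token scoped to the dedicated probe repo.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// 2. HEAD the manifest; per the docs, HEAD requests to the manifests
	//    endpoint are not counted against the pull limit.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()
	// e.g. "100;w=21600" — 100 pulls per 6-hour window for anonymous clients.
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

Authenticating the pulls (imagePullSecrets, or a docker login on the CI host) raises the quota and is the usual mitigation for jobs like these; the Audit table further down shows images can also be pre-loaded with "minikube image load", which avoids docker.io entirely.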

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-933412 --alsologtostderr -v=1]
E1120 20:40:58.268093    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:42:20.189723    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:44:36.327551    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:45:04.031387    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-933412 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-933412 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-933412 --alsologtostderr -v=1] stderr:
I1120 20:40:32.092971   18527 out.go:360] Setting OutFile to fd 1 ...
I1120 20:40:32.093274   18527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:40:32.093284   18527 out.go:374] Setting ErrFile to fd 2...
I1120 20:40:32.093288   18527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:40:32.093479   18527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
I1120 20:40:32.093725   18527 mustload.go:66] Loading cluster: functional-933412
I1120 20:40:32.094118   18527 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:40:32.095928   18527 host.go:66] Checking if "functional-933412" exists ...
I1120 20:40:32.096105   18527 api_server.go:166] Checking apiserver status ...
I1120 20:40:32.096143   18527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1120 20:40:32.098634   18527 main.go:143] libmachine: domain functional-933412 has defined MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:40:32.099074   18527 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:98:26", ip: ""} in network mk-functional-933412: {Iface:virbr1 ExpiryTime:2025-11-20 21:36:50 +0000 UTC Type:0 Mac:52:54:00:aa:98:26 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:functional-933412 Clientid:01:52:54:00:aa:98:26}
I1120 20:40:32.099095   18527 main.go:143] libmachine: domain functional-933412 has defined IP address 192.168.39.212 and MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:40:32.099279   18527 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/functional-933412/id_rsa Username:docker}
I1120 20:40:32.191394   18527 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6929/cgroup
W1120 20:40:32.206372   18527 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6929/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1120 20:40:32.206426   18527 ssh_runner.go:195] Run: ls
I1120 20:40:32.212543   18527 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8441/healthz ...
I1120 20:40:32.218591   18527 api_server.go:279] https://192.168.39.212:8441/healthz returned 200:
ok
W1120 20:40:32.218633   18527 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1120 20:40:32.218778   18527 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:40:32.218789   18527 addons.go:70] Setting dashboard=true in profile "functional-933412"
I1120 20:40:32.218795   18527 addons.go:239] Setting addon dashboard=true in "functional-933412"
I1120 20:40:32.218816   18527 host.go:66] Checking if "functional-933412" exists ...
I1120 20:40:32.222176   18527 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1120 20:40:32.223382   18527 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1120 20:40:32.224468   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1120 20:40:32.224481   18527 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1120 20:40:32.227111   18527 main.go:143] libmachine: domain functional-933412 has defined MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:40:32.227505   18527 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:98:26", ip: ""} in network mk-functional-933412: {Iface:virbr1 ExpiryTime:2025-11-20 21:36:50 +0000 UTC Type:0 Mac:52:54:00:aa:98:26 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:functional-933412 Clientid:01:52:54:00:aa:98:26}
I1120 20:40:32.227526   18527 main.go:143] libmachine: domain functional-933412 has defined IP address 192.168.39.212 and MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:40:32.227695   18527 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/functional-933412/id_rsa Username:docker}
I1120 20:40:32.326683   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1120 20:40:32.326746   18527 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1120 20:40:32.350240   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1120 20:40:32.350264   18527 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1120 20:40:32.374797   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1120 20:40:32.374829   18527 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1120 20:40:32.398571   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1120 20:40:32.398595   18527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1120 20:40:32.424339   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1120 20:40:32.424368   18527 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1120 20:40:32.449228   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1120 20:40:32.449323   18527 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1120 20:40:32.473137   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1120 20:40:32.473161   18527 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1120 20:40:32.496354   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1120 20:40:32.496387   18527 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1120 20:40:32.520260   18527 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1120 20:40:32.520296   18527 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1120 20:40:32.545472   18527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1120 20:40:33.340098   18527 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-933412 addons enable metrics-server

                                                
                                                
I1120 20:40:33.341398   18527 addons.go:202] Writing out "functional-933412" config to set dashboard=true...
W1120 20:40:33.341708   18527 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1120 20:40:33.342624   18527 kapi.go:59] client config for functional-933412: &rest.Config{Host:"https://192.168.39.212:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.key", CAFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1120 20:40:33.343263   18527 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1120 20:40:33.343288   18527 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1120 20:40:33.343295   18527 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1120 20:40:33.343301   18527 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1120 20:40:33.343307   18527 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1120 20:40:33.352220   18527 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  50b59604-85a1-4910-b866-3c25c71c5a7f 824 0 2025-11-20 20:40:33 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-20 20:40:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.87.218,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.87.218],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1120 20:40:33.352372   18527 out.go:285] * Launching proxy ...
* Launching proxy ...
I1120 20:40:33.352447   18527 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-933412 proxy --port 36195]
I1120 20:40:33.352835   18527 dashboard.go:159] Waiting for kubectl to output host:port ...
I1120 20:40:33.395633   18527 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1120 20:40:33.395697   18527 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1120 20:40:33.407028   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8001fd7e-c36b-441a-a980-9fd9c44d1a89] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170c3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6280 TLS:<nil>}
I1120 20:40:33.407109   18527 retry.go:31] will retry after 80.181µs: Temporary Error: unexpected response code: 503
I1120 20:40:33.410988   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04c27456-8bce-43c6-b909-5be65153f79d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001630a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b0f00 TLS:<nil>}
I1120 20:40:33.411055   18527 retry.go:31] will retry after 107.698µs: Temporary Error: unexpected response code: 503
I1120 20:40:33.415156   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[90ee28b0-a9c4-4f3d-964f-12947b3f7645] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170c500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6500 TLS:<nil>}
I1120 20:40:33.415224   18527 retry.go:31] will retry after 181.776µs: Temporary Error: unexpected response code: 503
I1120 20:40:33.419404   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b827b80a-b687-45b5-acc9-18c539ae77cb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001532400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1040 TLS:<nil>}
I1120 20:40:33.419467   18527 retry.go:31] will retry after 359.416µs: Temporary Error: unexpected response code: 503
I1120 20:40:33.423162   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[78d26cda-62bd-47cf-abce-b32c4d120eef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170c600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I1120 20:40:33.423222   18527 retry.go:31] will retry after 299.169µs: Temporary Error: unexpected response code: 503
I1120 20:40:33.428130   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[346b9af4-dd49-4226-98c5-b4923525ca87] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001532540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1180 TLS:<nil>}
I1120 20:40:33.428186   18527 retry.go:31] will retry after 1.09959ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.433826   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93f2e26c-ebf0-4dfa-a2bd-8be7352c4704] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170c6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002063c0 TLS:<nil>}
I1120 20:40:33.433912   18527 retry.go:31] will retry after 1.353566ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.438551   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f8a28f47-1140-4ed0-a26b-934f61e6a9a3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170c780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b12c0 TLS:<nil>}
I1120 20:40:33.438613   18527 retry.go:31] will retry after 1.566705ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.443201   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2c21519f-20b4-4fb5-b7a4-8121fb3af180] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001532640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1400 TLS:<nil>}
I1120 20:40:33.443246   18527 retry.go:31] will retry after 1.477099ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.447652   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6092d8f-9e74-490d-b378-886432b0ab4b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170c880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206500 TLS:<nil>}
I1120 20:40:33.447695   18527 retry.go:31] will retry after 3.981687ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.454340   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[71ece009-0116-41ba-9c70-87f597307742] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001532740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1540 TLS:<nil>}
I1120 20:40:33.454383   18527 retry.go:31] will retry after 6.092052ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.463435   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b2f24b84-a6e0-4e76-a1f1-7fea7052733e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170c980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I1120 20:40:33.463494   18527 retry.go:31] will retry after 12.502214ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.479956   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[87bc8eed-96ed-4587-9a1d-52cfac0c602e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001532840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1680 TLS:<nil>}
I1120 20:40:33.480040   18527 retry.go:31] will retry after 14.446864ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.498458   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3514f805-e602-4216-9742-f953426d0ad3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170cac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206780 TLS:<nil>}
I1120 20:40:33.498527   18527 retry.go:31] will retry after 10.314312ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.513236   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0250052e-21e6-4590-948b-cbfc80b96ec3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001532900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b17c0 TLS:<nil>}
I1120 20:40:33.513297   18527 retry.go:31] will retry after 18.764833ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.535858   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[554bf73a-5534-4541-8ec5-3944b2cc9c21] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001630bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I1120 20:40:33.535908   18527 retry.go:31] will retry after 55.28489ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.595605   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a71e009e-a9d9-4c6a-a903-982a4ebe694b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170cbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6640 TLS:<nil>}
I1120 20:40:33.595687   18527 retry.go:31] will retry after 97.889863ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.697611   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d11d61b-b5df-4d8e-9e32-26bb188f513e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001630cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1900 TLS:<nil>}
I1120 20:40:33.697685   18527 retry.go:31] will retry after 130.707649ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.832732   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b5dcc1e-bba7-4236-8e18-cbc9fa397457] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc00170ccc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b68c0 TLS:<nil>}
I1120 20:40:33.832790   18527 retry.go:31] will retry after 90.799932ms: Temporary Error: unexpected response code: 503
I1120 20:40:33.926906   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c51772e4-ec41-49af-ac1b-1ddfd0b08fa1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:33 GMT]] Body:0xc001630e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1a40 TLS:<nil>}
I1120 20:40:33.926964   18527 retry.go:31] will retry after 242.882501ms: Temporary Error: unexpected response code: 503
I1120 20:40:34.173365   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[186eca52-ee83-4992-8a77-a11fb4d0aa42] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:34 GMT]] Body:0xc001532a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6a00 TLS:<nil>}
I1120 20:40:34.173433   18527 retry.go:31] will retry after 185.982834ms: Temporary Error: unexpected response code: 503
I1120 20:40:34.362924   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79e4e26d-d3b1-4781-96ff-060249faecc7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:34 GMT]] Body:0xc001532ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1120 20:40:34.362997   18527 retry.go:31] will retry after 572.558913ms: Temporary Error: unexpected response code: 503
I1120 20:40:34.939950   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f662091b-2add-4c12-8552-29f8e9cf298e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:34 GMT]] Body:0xc001630f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1120 20:40:34.940028   18527 retry.go:31] will retry after 554.761274ms: Temporary Error: unexpected response code: 503
I1120 20:40:35.498772   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69bb0a2c-7be8-4390-92e7-6d672fe9274d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:35 GMT]] Body:0xc00170cd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6dc0 TLS:<nil>}
I1120 20:40:35.498828   18527 retry.go:31] will retry after 1.340072225s: Temporary Error: unexpected response code: 503
I1120 20:40:36.843477   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa1269e5-c8e4-4180-9f9a-f1b86ebb7e00] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:36 GMT]] Body:0xc001631000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1b80 TLS:<nil>}
I1120 20:40:36.843556   18527 retry.go:31] will retry after 918.904658ms: Temporary Error: unexpected response code: 503
I1120 20:40:37.766034   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd05dece-67f0-4716-ba1f-f2501bb25a14] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:37 GMT]] Body:0xc001532c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6f00 TLS:<nil>}
I1120 20:40:37.766115   18527 retry.go:31] will retry after 2.84248298s: Temporary Error: unexpected response code: 503
I1120 20:40:40.614128   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc0de6a9-5621-49d6-9eb8-f2fbe7f1917e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:40 GMT]] Body:0xc001631100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1120 20:40:40.614193   18527 retry.go:31] will retry after 5.136169755s: Temporary Error: unexpected response code: 503
I1120 20:40:45.754118   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[89c2fcfa-c47e-4f8f-bd06-0ef8628d3c5a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:45 GMT]] Body:0xc001532d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1cc0 TLS:<nil>}
I1120 20:40:45.754196   18527 retry.go:31] will retry after 8.066505029s: Temporary Error: unexpected response code: 503
I1120 20:40:53.825015   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[530f18c1-a8d2-409f-b493-c41aa92f5fd9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:40:53 GMT]] Body:0xc00170cf00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1120 20:40:53.825079   18527 retry.go:31] will retry after 11.049035381s: Temporary Error: unexpected response code: 503
I1120 20:41:04.878421   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc680bc0-2579-4563-9cb3-6209ff5d3fff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:41:04 GMT]] Body:0xc0016311c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004b1e00 TLS:<nil>}
I1120 20:41:04.878493   18527 retry.go:31] will retry after 6.489785819s: Temporary Error: unexpected response code: 503
I1120 20:41:11.375905   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[65c75570-b15b-4fe1-9eda-92c739e8f26e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:41:11 GMT]] Body:0xc00170d000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7040 TLS:<nil>}
I1120 20:41:11.375958   18527 retry.go:31] will retry after 22.691282028s: Temporary Error: unexpected response code: 503
I1120 20:41:34.070757   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[33ba07a9-dec8-4ec2-a385-05708d5d18ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:41:34 GMT]] Body:0xc001532dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000128000 TLS:<nil>}
I1120 20:41:34.070825   18527 retry.go:31] will retry after 29.497073495s: Temporary Error: unexpected response code: 503
I1120 20:42:03.572514   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66dbc5f1-a46c-401f-8c22-0119a9286162] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:42:03 GMT]] Body:0xc001532e80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1120 20:42:03.572572   18527 retry.go:31] will retry after 22.118720703s: Temporary Error: unexpected response code: 503
I1120 20:42:25.696633   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5a19fb9-8e41-4540-af76-c281bdd93f11] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:42:25 GMT]] Body:0xc00170d0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002077c0 TLS:<nil>}
I1120 20:42:25.696688   18527 retry.go:31] will retry after 1m1.984695829s: Temporary Error: unexpected response code: 503
I1120 20:43:27.685319   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6278b7d2-8717-4248-abc9-df43ae53f325] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:43:27 GMT]] Body:0xc001532040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6000 TLS:<nil>}
I1120 20:43:27.685394   18527 retry.go:31] will retry after 1m0.32414795s: Temporary Error: unexpected response code: 503
I1120 20:44:28.014636   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fd4491db-bce3-4936-8481-d71254fe7b4e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:44:27 GMT]] Body:0xc00170c080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000128140 TLS:<nil>}
I1120 20:44:28.014730   18527 retry.go:31] will retry after 58.881300283s: Temporary Error: unexpected response code: 503
I1120 20:45:26.903104   18527 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5774d407-0795-4e73-8023-a4dddddaeb84] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 20 Nov 2025 20:45:26 GMT]] Body:0xc00170c080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000128280 TLS:<nil>}
I1120 20:45:26.903225   18527 retry.go:31] will retry after 54.977040746s: Temporary Error: unexpected response code: 503
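The stderr above is minikube's retry loop (retry.go) polling the kubectl proxy URL and backing off after each 503; every probe failed, so the dashboard never answered within the test's roughly five-minute window and functional_test.go:933 reports that no URL was produced. A stdlib sketch of the same poll-until-healthy pattern, using an illustrative capped exponential backoff rather than minikube's actual schedule:

// pollproxy.go — minimal sketch of the poll-until-healthy loop the dashboard
// test runs against the kubectl proxy. Backoff values and the URL below are
// illustrative, not minikube's actual retry schedule.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 100 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the dashboard answered through the proxy
			}
			log.Printf("unexpected response code: %d, retrying in %s", resp.StatusCode, backoff)
		} else {
			log.Printf("request failed: %v, retrying in %s", err, backoff)
		}
		time.Sleep(backoff)
		if backoff *= 2; backoff > 30*time.Second {
			backoff = 30 * time.Second // cap the exponential growth
		}
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitHealthy(url, 5*time.Minute); err != nil {
		log.Fatal(err)
	}
}

The 503s here are consistent with the dashboard images (docker.io/kubernetesui/dashboard, docker.io/kubernetesui/metrics-scraper) being subject to the same Docker Hub pull limit seen in the addons failures above.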
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-933412 -n functional-933412
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 logs -n 25: (1.553188438s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-933412 ssh -- ls -la /mount-9p                                                                                         │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:39 UTC │ 20 Nov 25 20:39 UTC │
	│ ssh       │ functional-933412 ssh cat /mount-9p/test-1763671161701273709                                                                      │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:39 UTC │ 20 Nov 25 20:39 UTC │
	│ ssh       │ functional-933412 ssh stat /mount-9p/created-by-test                                                                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh stat /mount-9p/created-by-pod                                                                               │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh sudo umount -f /mount-9p                                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdspecific-port3575881837/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ ssh       │ functional-933412 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh -- ls -la /mount-9p                                                                                         │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh sudo umount -f /mount-9p                                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ ssh       │ functional-933412 ssh findmnt -T /mount1                                                                                          │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount1 --alsologtostderr -v=1                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount2 --alsologtostderr -v=1                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount3 --alsologtostderr -v=1                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ ssh       │ functional-933412 ssh findmnt -T /mount1                                                                                          │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh findmnt -T /mount2                                                                                          │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh findmnt -T /mount3                                                                                          │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ mount     │ -p functional-933412 --kill=true                                                                                                  │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ start     │ -p functional-933412 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ start     │ -p functional-933412 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ start     │ -p functional-933412 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-933412 --alsologtostderr -v=1                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ image     │ functional-933412 image load --daemon kicbase/echo-server:functional-933412 --alsologtostderr                                     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image     │ functional-933412 image ls                                                                                                        │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image     │ functional-933412 image load --daemon kicbase/echo-server:functional-933412 --alsologtostderr                                     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:40:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:40:31.985595   18511 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:40:31.985883   18511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.985893   18511 out.go:374] Setting ErrFile to fd 2...
	I1120 20:40:31.985900   18511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.986117   18511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:40:31.986549   18511 out.go:368] Setting JSON to false
	I1120 20:40:31.987404   18511 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1382,"bootTime":1763669850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:40:31.987455   18511 start.go:143] virtualization: kvm guest
	I1120 20:40:31.989149   18511 out.go:179] * [functional-933412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:40:31.990360   18511 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:40:31.990348   18511 notify.go:221] Checking for updates...
	I1120 20:40:31.992108   18511 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:40:31.993326   18511 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:40:31.994559   18511 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:40:31.999027   18511 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:40:32.000067   18511 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:40:32.001461   18511 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:40:32.001869   18511 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:40:32.032149   18511 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 20:40:32.033200   18511 start.go:309] selected driver: kvm2
	I1120 20:40:32.033212   18511 start.go:930] validating driver "kvm2" against &{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:32.033322   18511 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:40:32.034235   18511 cni.go:84] Creating CNI manager for ""
	I1120 20:40:32.034287   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:40:32.034333   18511 start.go:353] cluster config:
	{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:32.035594   18511 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.938103903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a30630c9-7837-4cbf-9220-dd15342d5142 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.938197542Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a30630c9-7837-4cbf-9220-dd15342d5142 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.940276655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0b23408-9b68-4fd4-862b-2f615820a35d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.940906522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671532940882385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175579,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0b23408-9b68-4fd4-862b-2f615820a35d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.941929839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50aadcff-6291-48e4-9d8f-af5b4c99b998 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.942465893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50aadcff-6291-48e4-9d8f-af5b4c99b998 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.942843049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50aadcff-6291-48e4-9d8f-af5b4c99b998 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.972603024Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3145a456-5b4a-4796-a555-3fad84bc74b7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.973765304Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7e3dea60d9a009c307073509e75cbdc9422eb4e6081124f1caa0f96d976c1a15,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-97v7k,Uid:4811a1cd-b896-49a4-8130-01de71cc2b82,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763671235326978444,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-97v7k,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4811a1cd-b896-49a4-8130-01de71cc2b82,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:40:33.202187962Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:5653e98f6f59db11811bd1aa4a4a7b0231e75f2fb2b334050d36
53dbb503a2b1,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-w4799,Uid:753b6014-a9c2-4e38-9016-1adac90b4a77,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763671235000980063,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-w4799,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 753b6014-a9c2-4e38-9016-1adac90b4a77,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:40:33.183932305Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b7fc047b46d4871205dbd6105ec23e1a9810ec54dbc5f18d6dc16129c517bbf,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:3c972cf3-8435-4a39-8c33-cc134f096e49,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763671168297878646,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernete
s.pod.namespace: default,io.kubernetes.pod.uid: 3c972cf3-8435-4a39-8c33-cc134f096e49,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-11-20T20:39:27.980099241Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee0266ff9a036abcd2ff497f12fe938bac48a11b5445d165b154b21deb23b49b,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-2dthj,Uid:be97d2c4-1a44-4335-89c3-8e28cceea1a0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763671162696465124,Labels:map[string]string{app: hello-node,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-2dthj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be97d2c4-1a44-4335-89c3-8e28cceea1a0,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:39:20.865565232Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f4228738bf83a3f9c5dba586075541ebd2839bec69eef24174798ed47d3f5ce6,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-ppbrm,Uid:b5061d2e-b0e6-491a-8b57-c22e7f8adc92,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763671162612445568,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-ppbrm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b5061d2e-b0e6-491a-8b57-c22e7f8adc92,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:39:20.794268146Z,kubernetes.io/config.source: api,},RuntimeHa
ndler:,},&PodSandbox{Id:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-933412,Uid:72af6a97b1729f619e770ceba1822a32,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763671133614492658,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.212:8441,kubernetes.io/config.hash: 72af6a97b1729f619e770ceba1822a32,kubernetes.io/config.seen: 2025-11-20T20:38:52.932925238Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:903793d0-3eb5-4f21-a0c6-580ef4002705,Namespace:kube-system,At
tempt:2,},State:SANDBOX_READY,CreatedAt:1763671114245594383,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{
\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-20T20:37:51.201035797Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2b7p9,Uid:952cefb1-c3e7-481c-bb72-d7f96fde7bd9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1763671111578241178,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:37:51.201036993Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&PodSandboxMetadata{Name:etcd-functional-933412,Uid:f0e662c9a19ce79799b109de1b1f4882,Namespace:
kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1763671107942115994,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.212:2379,kubernetes.io/config.hash: f0e662c9a19ce79799b109de1b1f4882,kubernetes.io/config.seen: 2025-11-20T20:37:47.202383745Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-933412,Uid:886f9cf737a5ee47db4d863c4c536829,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1763671107922383497,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 886f9cf737a5ee47db4d863c4c536829,kubernetes.io/config.seen: 2025-11-20T20:37:47.202379869Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-933412,Uid:3296ba79b34d04995df213bff57a006e,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1763671107889993143,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3296ba79b34d04995df213bff57a006e,kubernetes.io/config.seen: 2025-11-20T20:37:47.202385580Z,kubernetes.io/conf
ig.source: file,},RuntimeHandler:,},&PodSandbox{Id:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&PodSandboxMetadata{Name:kube-proxy-6xnj6,Uid:19e68395-250c-46fc-8028-1e9e2456ac3b,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1763671107881839316,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-20T20:37:51.201033428Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3145a456-5b4a-4796-a555-3fad84bc74b7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.975131718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef716c24-15ae-4eda-ab46-680ec9781da2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.975207430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef716c24-15ae-4eda-ab46-680ec9781da2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.975393247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Cr
eatedAt:1763671111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef716c24-15ae-4eda-ab46-680ec9781da2 name=/runtime.v1.RuntimeService/ListCon
tainers
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.984518353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0872b42f-9cf2-4a65-bb3f-6981cb3fbac5 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.984581846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0872b42f-9cf2-4a65-bb3f-6981cb3fbac5 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.986941565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95f1f139-e721-4fed-a258-02855f79f9cb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.988305696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671532988107751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175579,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95f1f139-e721-4fed-a258-02855f79f9cb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.989936262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae3d3b9e-7bd4-4cd5-89de-8a4288ae3fae name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.990020581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae3d3b9e-7bd4-4cd5-89de-8a4288ae3fae name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:32 functional-933412 crio[5444]: time="2025-11-20 20:45:32.990302124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae3d3b9e-7bd4-4cd5-89de-8a4288ae3fae name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:33 functional-933412 crio[5444]: time="2025-11-20 20:45:33.008580268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5e2e64b-8a2d-41b6-8822-bb42cb6683e3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:33 functional-933412 crio[5444]: time="2025-11-20 20:45:33.010074131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671533010046737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175579,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5e2e64b-8a2d-41b6-8822-bb42cb6683e3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:33 functional-933412 crio[5444]: time="2025-11-20 20:45:33.011433669Z" level=debug msg="Request: &ListImagesRequest{Filter:&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},},}" file="otel-collector/interceptors.go:62" id=dd7bdd27-3cea-4ac5-895c-ca792da9e7fa name=/runtime.v1.ImageService/ListImages
	Nov 20 20:45:33 functional-933412 crio[5444]: time="2025-11-20 20:45:33.013098037Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,RepoTags:[registry.k8s.io/kube-apiserver:v1.34.1],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964 registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902],Size_:89046001,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,RepoTags:[registry.k8s.io/kube-controller-manager:v1.34.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89 registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992],Size_:76004181,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&
Image{Id:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,RepoTags:[registry.k8s.io/kube-scheduler:v1.34.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31 registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500],Size_:53844823,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,RepoTags:[registry.k8s.io/kube-proxy:v1.34.1],RepoDigests:[registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a],Size_:73138073,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f,RepoTags:[registry.k8s.io/pause:3.10.1],RepoDigests:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c800027
67d24c registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41],Size_:742092,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,RepoTags:[registry.k8s.io/etcd:3.6.4-0],RepoDigests:[registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19],Size_:195976448,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,RepoTags:[registry.k8s.io/coredns/coredns:v1.12.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998 registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c],Size_:76103547,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,RepoTags:[docker.io/kindest/kindnetd:v20250512-df8de77b],RepoDigests:[docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11],Size_:109379124,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e,RepoTags:[registry.k8s.io/pause:3.1],RepoDigests:[registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e],Size_:746911,Uid:nil,
Username:,Spec:nil,Pinned:false,},&Image{Id:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da,RepoTags:[registry.k8s.io/pause:3.3],RepoDigests:[registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04],Size_:686139,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e99081a6baf88fed1e7c035febe490c6260ee43d6252e259470fe0ed1efc2e43,RepoTags:[localhost/minikube-local-cache-test:functional-933412],RepoDigests:[localhost/minikube-local-cache-test@sha256:e0ff1238b00c2e7fe3c834b0b3f4d32268852e3ed109461619a68873527289fc],Size_:3330,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06,RepoTags:[registry.k8s.io/pause:latest],RepoDigests:[registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9],Size_:247077,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-gl
ibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[localhost/kicbase/echo-server:functional-933412],RepoDigests:[localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf],Size_:4943877,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=dd7bdd27-3cea-4ac5-895c-ca792da9e7fa name=/runtime.v1.ImageService/ListImages
	Nov 20 20:45:33 functional-933412 crio[5444]: time="2025-11-20 20:45:33.019791066Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Nov 20 20:45:33 functional-933412 crio[5444]: time="2025-11-20 20:45:33.036099766Z" level=debug msg="Too many requests to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: sleeping for 4.000000 seconds before next attempt" file="docker/docker_client.go:596"
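
The two crio lines above are the first symptom of the failure mode that dominates the rest of this log: anonymous pulls from registry-1.docker.io are being throttled, so the image client sleeps for 4 seconds before retrying. Below is a minimal Go sketch (not part of the minikube suite) for checking how much anonymous quota a node has left. It relies on Docker Hub's documented ratelimit-* response headers and its ratelimitpreview/test probe repository; per Docker's documentation, a HEAD request on that manifest reports the limits without consuming a pull.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// 1. Fetch an anonymous bearer token scoped to Docker's probe repository.
    	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		panic(err)
    	}

    	// 2. HEAD the manifest: the quota arrives in response headers and,
    	// per Docker's docs, a HEAD request does not count against the limit.
    	req, err := http.NewRequest(http.MethodHead,
    		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	res.Body.Close()

    	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
    	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
    }

Run from the affected host, a ratelimit-remaining of 0 would explain both this back-off and the kubelet's toomanyrequests errors further down.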
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	109d4bb80eac7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   9d5410f166b27       busybox-mount                               default
	2742e68c74423       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                3                   1f6a7ab0c91ba       kube-proxy-6xnj6                            kube-system
	673e5b087a6e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       3                   f1635aeef3e54       storage-provisioner                         kube-system
	564bc1707bc93       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   e94448a6ba4f6       kube-apiserver-functional-933412            kube-system
	90434f3698428       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   3                   7026443238fe7       kube-controller-manager-functional-933412   kube-system
	22f1327d1dafb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            3                   512c473573a1e       kube-scheduler-functional-933412            kube-system
	52d08d12cb18e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      3                   d61277493dd5b       etcd-functional-933412                      kube-system
	01fbf4a1da609       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       2                   f1635aeef3e54       storage-provisioner                         kube-system
	71d72227e095d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Running             coredns                   2                   abafba1840584       coredns-66bc5c9577-2b7p9                    kube-system
	e2153be7a1118       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      2                   d61277493dd5b       etcd-functional-933412                      kube-system
	ac98f1d3b4d98       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      7 minutes ago       Exited              kube-controller-manager   2                   7026443238fe7       kube-controller-manager-functional-933412   kube-system
	06fbf273e2f27       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            2                   512c473573a1e       kube-scheduler-functional-933412            kube-system
	2a0c4fbb9b5d7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                2                   1f6a7ab0c91ba       kube-proxy-6xnj6                            kube-system
	2f33930aaa277       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   1                   a27089054680f       coredns-66bc5c9577-2b7p9                    kube-system
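
The table above is a human rendering of the ListContainers response logged earlier; the ATTEMPT column mirrors the io.kubernetes.container.restartCount annotation. A small sketch that rebuilds the NAME/ATTEMPT/STATE columns from `sudo crictl ps -a -o json`, assuming crictl is available on the node and that its JSON output follows the CRI field names visible in the crio log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Matches the CRI ListContainersResponse fields printed in the crio log.
    type listContainers struct {
    	Containers []struct {
    		Metadata struct {
    			Name    string `json:"name"`
    			Attempt uint32 `json:"attempt"`
    		} `json:"metadata"`
    		State string `json:"state"` // e.g. CONTAINER_RUNNING, CONTAINER_EXITED
    	} `json:"containers"`
    }

    func main() {
    	var lc listContainers
    	if err := json.NewDecoder(os.Stdin).Decode(&lc); err != nil {
    		fmt.Fprintln(os.Stderr, "decode:", err)
    		os.Exit(1)
    	}
    	for _, c := range lc.Containers {
    		fmt.Printf("%-26s attempt=%d  %s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
    	}
    }

Because -a includes exited containers, feeding this program `sudo crictl ps -a -o json` also lists the earlier attempts (coredns attempt 1, etcd attempt 2, and so on) that the runtime still tracks.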
	
	
	==> coredns [2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40383 - 36748 "HINFO IN 5820690942743418349.4099311619110396990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022726285s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0] <==
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45883 - 50568 "HINFO IN 5877995768124618261.4169147699699731941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.421086977s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41888->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41918->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41904->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
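
The reflector errors above bracket the kube-apiserver restart: TLS handshakes first time out, then 10.96.0.1:443 (the clusterIP of the kubernetes Service) refuses connections outright, and the ready plugin holds CoreDNS not-ready until its kubernetes caches sync. A minimal connectivity probe in the same spirit, meant to run inside a pod; the address is taken from the log lines above, not guaranteed elsewhere:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// 10.96.0.1:443 is the kubernetes Service clusterIP in this cluster.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
    	if err != nil {
    		// During the restart window this is exactly the timeout /
    		// "connection refused" CoreDNS reports above.
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver clusterIP reachable")
    }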
	
	
	==> describe nodes <==
	Name:               functional-933412
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-933412
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=functional-933412
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-933412
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:45:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:40:59 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:40:59 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:40:59 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:40:59 +0000   Thu, 20 Nov 2025 20:37:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    functional-933412
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 acb893993e724fd68d51aa75dbb6007a
	  System UUID:                acb89399-3e72-4fd6-8d51-aa75dbb6007a
	  Boot ID:                    8a667e28-1db3-4eb5-acb8-0cecc80439c5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2dthj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  default                     hello-node-connect-7d85dfc575-ppbrm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 coredns-66bc5c9577-2b7p9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m14s
	  kube-system                 etcd-functional-933412                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m21s
	  kube-system                 kube-apiserver-functional-933412              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-functional-933412     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-proxy-6xnj6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-functional-933412              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-97v7k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-w4799         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m13s                  kube-proxy       
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  Starting                 7m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m20s                  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m20s                  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m20s                  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m20s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m19s                  kubelet          Node functional-933412 status is now: NodeReady
	  Normal  RegisteredNode           8m15s                  node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	  Normal  NodeHasNoDiskPressure    7m46s (x8 over 7m46s)  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m46s (x8 over 7m46s)  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m46s (x7 over 7m46s)  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m39s                  node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	  Normal  Starting                 6m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m40s (x8 over 6m40s)  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x8 over 6m40s)  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x7 over 6m40s)  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m34s                  node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	
	
	==> dmesg <==
	[  +0.002100] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.203979] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081853] kauditd_printk_skb: 1 callbacks suppressed
	[Nov20 20:37] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.149691] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.660350] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.929408] kauditd_printk_skb: 249 callbacks suppressed
	[  +0.111579] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.014171] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.566090] kauditd_printk_skb: 176 callbacks suppressed
	[Nov20 20:38] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.114496] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.272570] kauditd_printk_skb: 182 callbacks suppressed
	[  +2.672253] kauditd_printk_skb: 229 callbacks suppressed
	[  +6.943960] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.127034] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.126294] kauditd_printk_skb: 121 callbacks suppressed
	[Nov20 20:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.055294] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.001227] kauditd_printk_skb: 104 callbacks suppressed
	[ +25.129996] kauditd_printk_skb: 26 callbacks suppressed
	[Nov20 20:40] kauditd_printk_skb: 29 callbacks suppressed
	[Nov20 20:41] kauditd_printk_skb: 68 callbacks suppressed
	
	
	==> etcd [52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4] <==
	{"level":"warn","ts":"2025-11-20T20:38:55.504876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.517219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.522964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.537892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.556105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.570738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.578629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.584498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.598770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.615999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.627387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.635890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.644879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.658598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.671138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.678504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.686422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.693713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.703372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.716480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.726224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.743373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.760117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.770191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.861274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46834","server-name":"","error":"EOF"}
	
	
	==> etcd [e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952] <==
	{"level":"info","ts":"2025-11-20T20:38:29.355734Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","recovered-remote-peer-id":"eed9c28654b6490f","recovered-remote-peer-urls":["https://192.168.39.212:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:38:29.356742Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.356755Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-20T20:38:29.356796Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-20T20:38:29.356888Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"eed9c28654b6490f switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-20T20:38:29.356945Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"eed9c28654b6490f became follower at term 3"}
	{"level":"info","ts":"2025-11-20T20:38:29.356975Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft eed9c28654b6490f [peers: [], term: 3, commit: 552, applied: 0, lastindex: 552, lastterm: 3]"}
	{"level":"warn","ts":"2025-11-20T20:38:29.364020Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-20T20:38:29.391830Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":511}
	{"level":"info","ts":"2025-11-20T20:38:29.401745Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-20T20:38:29.403842Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"eed9c28654b6490f","timeout":"7s"}
	{"level":"info","ts":"2025-11-20T20:38:29.405934Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"eed9c28654b6490f"}
	{"level":"info","ts":"2025-11-20T20:38:29.406004Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"eed9c28654b6490f","local-server-version":"3.6.4","cluster-id":"f8d3b95e5bbb719c","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.406949Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"eed9c28654b6490f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407041Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407109Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407119Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407311Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"eed9c28654b6490f switched to configuration voters=(17211001333175699727)"}
	{"level":"info","ts":"2025-11-20T20:38:29.407396Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","added-peer-id":"eed9c28654b6490f","added-peer-peer-urls":["https://192.168.39.212:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:38:29.407490Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.414334Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T20:38:29.418300Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"eed9c28654b6490f","initial-advertise-peer-urls":["https://192.168.39.212:2380"],"listen-peer-urls":["https://192.168.39.212:2380"],"advertise-client-urls":["https://192.168.39.212:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.212:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T20:38:29.418370Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T20:38:29.418498Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2025-11-20T20:38:29.418530Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.212:2380"}
	
	
	==> kernel <==
	 20:45:33 up 8 min,  0 users,  load average: 0.24, 0.46, 0.33
	Linux functional-933412 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a] <==
	I1120 20:38:56.695513       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 20:38:56.697230       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 20:38:56.715837       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 20:38:56.717113       1 aggregator.go:171] initial CRD sync complete...
	I1120 20:38:56.717416       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 20:38:56.717505       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 20:38:56.717522       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:38:56.725574       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:38:56.725821       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1120 20:38:56.748285       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 20:38:57.026303       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:38:57.489472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:38:58.325699       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:38:58.364821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:38:58.390567       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:38:58.397107       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:39:00.061709       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:39:00.112371       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:39:00.362096       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:39:16.327502       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.145.237"}
	I1120 20:39:20.910863       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.37.109"}
	I1120 20:39:20.942215       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.60.10"}
	I1120 20:40:32.961131       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:40:33.299327       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.87.218"}
	I1120 20:40:33.323787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.190.79"}
	
	
	==> kube-controller-manager [90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17] <==
	I1120 20:38:59.997635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:39:00.006780       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 20:39:00.006997       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 20:39:00.007087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:39:00.007130       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:39:00.007148       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:39:00.007149       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:39:00.008479       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:39:00.009407       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 20:39:00.012086       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:39:00.012538       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:39:00.019115       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 20:39:00.032896       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 20:39:00.039254       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 20:39:00.041533       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:39:00.042922       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:39:00.048280       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 20:39:00.055113       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1120 20:40:33.053616       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.093742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.097347       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.109865       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.113801       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.127931       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.129241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cdc6e2759d64e3b6d] <==
	
	
	==> kube-proxy [2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262] <==
	I1120 20:38:57.537634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:38:57.639095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:38:57.639246       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.212"]
	E1120 20:38:57.639542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:38:57.744439       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:38:57.744545       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:38:57.744583       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:38:57.758377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:38:57.760174       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:38:57.760235       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:38:57.767043       1 config.go:200] "Starting service config controller"
	I1120 20:38:57.767096       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:38:57.767126       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:38:57.767140       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:38:57.767160       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:38:57.767173       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:38:57.767955       1 config.go:309] "Starting node config controller"
	I1120 20:38:57.767998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:38:57.768014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:38:57.867737       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:38:57.867768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:38:57.867742       1 shared_informer.go:356] "Caches are synced" controller="service config"
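
The ip6tables failure earlier in this block is informational, not fatal: kube-proxy probes the IPv6 nat table, finds the kernel has no ip6table_nat support, and falls back to single-stack IPv4. A minimal reproduction of that probe (run as root on the node), mirroring the exact check the log shows:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same check kube-proxy logs: list POSTROUTING in the IPv6 nat table.
    	// "Table does not exist" (exit status 3) means no ip6table_nat module,
    	// so the proxier runs single-stack IPv4.
    	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "POSTROUTING").CombinedOutput()
    	if err != nil {
    		fmt.Printf("no IPv6 NAT support: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("IPv6 nat table present")
    }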
	
	
	==> kube-proxy [2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae] <==
	I1120 20:38:28.682607       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:38:28.797288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1120 20:38:28.818903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-933412&limit=500&resourceVersion=0\": dial tcp 192.168.39.212:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:38:40.372759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-933412&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99] <==
	I1120 20:38:31.345686       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b] <==
	I1120 20:38:55.103741       1 serving.go:386] Generated self-signed cert in-memory
	I1120 20:38:56.746614       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 20:38:56.746746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:38:56.758093       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 20:38:56.758871       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 20:38:56.758924       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 20:38:56.758964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 20:38:56.760186       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.760330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.760231       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 20:38:56.766036       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 20:38:56.859179       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 20:38:56.861243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.871189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:44:30 functional-933412 kubelet[6715]: E1120 20:44:30.773994    6715 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 20 20:44:30 functional-933412 kubelet[6715]: E1120 20:44:30.774268    6715 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-w4799_kubernetes-dashboard(753b6014-a9c2-4e38-9016-1adac90b4a77): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:44:30 functional-933412 kubelet[6715]: E1120 20:44:30.774301    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-w4799" podUID="753b6014-a9c2-4e38-9016-1adac90b4a77"
	Nov 20 20:44:33 functional-933412 kubelet[6715]: E1120 20:44:33.180311    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671473179194610  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:33 functional-933412 kubelet[6715]: E1120 20:44:33.180358    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671473179194610  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:41 functional-933412 kubelet[6715]: E1120 20:44:41.974062    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-w4799" podUID="753b6014-a9c2-4e38-9016-1adac90b4a77"
	Nov 20 20:44:43 functional-933412 kubelet[6715]: E1120 20:44:43.182312    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671483181823760  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:43 functional-933412 kubelet[6715]: E1120 20:44:43.182332    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671483181823760  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:53 functional-933412 kubelet[6715]: E1120 20:44:53.085108    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod952cefb1-c3e7-481c-bb72-d7f96fde7bd9/crio-a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1: Error finding container a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1: Status 404 returned error can't find the container with id a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1
	Nov 20 20:44:53 functional-933412 kubelet[6715]: E1120 20:44:53.085705    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod886f9cf737a5ee47db4d863c4c536829/crio-37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95: Error finding container 37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95: Status 404 returned error can't find the container with id 37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95
	Nov 20 20:44:53 functional-933412 kubelet[6715]: E1120 20:44:53.184607    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671493183919249  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:53 functional-933412 kubelet[6715]: E1120 20:44:53.184706    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671493183919249  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:00 functional-933412 kubelet[6715]: E1120 20:45:00.883051    6715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 20 20:45:00 functional-933412 kubelet[6715]: E1120 20:45:00.883113    6715 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 20 20:45:00 functional-933412 kubelet[6715]: E1120 20:45:00.883413    6715 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-97v7k_kubernetes-dashboard(4811a1cd-b896-49a4-8130-01de71cc2b82): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:45:00 functional-933412 kubelet[6715]: E1120 20:45:00.883454    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:45:03 functional-933412 kubelet[6715]: E1120 20:45:03.188404    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671503186562933  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:03 functional-933412 kubelet[6715]: E1120 20:45:03.188425    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671503186562933  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:13 functional-933412 kubelet[6715]: E1120 20:45:13.190275    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671513189556218  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:13 functional-933412 kubelet[6715]: E1120 20:45:13.190321    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671513189556218  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:14 functional-933412 kubelet[6715]: E1120 20:45:14.978219    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:45:23 functional-933412 kubelet[6715]: E1120 20:45:23.192604    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671523192273175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:23 functional-933412 kubelet[6715]: E1120 20:45:23.192626    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671523192273175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:33 functional-933412 kubelet[6715]: E1120 20:45:33.195746    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671533194401230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	Nov 20 20:45:33 functional-933412 kubelet[6715]: E1120 20:45:33.195830    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671533194401230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
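Every dashboard and metrics-scraper pull in this block dies on Docker Hub's toomanyrequests response, i.e. the CI host has exhausted its anonymous pull quota. The remaining quota can be read from the registry's rate-limit headers using Docker's documented probe (jq assumed available; the HEAD request itself does not consume quota):

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit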
	
	
	==> storage-provisioner [01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b] <==
	I1120 20:38:34.406720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 20:38:44.409322       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: TLS handshake timeout
	
	
	==> storage-provisioner [673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a] <==
	W1120 20:45:08.782745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:10.786742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:10.792360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:12.796493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:12.805751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:14.809890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:14.814726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:16.818261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:16.827630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:18.831370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:18.837803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:20.840782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:20.846454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:22.850346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:22.860884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:24.864971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:24.869505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:26.873183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:26.882183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:28.887786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:28.902862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:30.907074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:30.915276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:32.920425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:32.929743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
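These warnings repeat every two seconds because the storage-provisioner still watches v1 Endpoints, most likely for its leader-election lock; nothing is failing, the API is merely deprecated. The two resources can be compared side by side (same context, purely illustrative; the first command reproduces the warning):

    kubectl --context functional-933412 -n kube-system get endpoints
    kubectl --context functional-933412 -n kube-system get endpointslices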
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-933412 -n functional-933412
helpers_test.go:269: (dbg) Run:  kubectl --context functional-933412 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799: exit status 1 (110.333895ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 20 Nov 2025 20:40:25 +0000
	      Finished:     Thu, 20 Nov 2025 20:40:26 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62gmv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-62gmv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m11s  default-scheduler  Successfully assigned default/busybox-mount to functional-933412
	  Normal  Pulling    6m11s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.495s (1m2.514s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m9s   kubelet            Created container: mount-munger
	  Normal  Started    5m8s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-2dthj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qql4l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qql4l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason       Age                    From               Message
	  ----     ------       ----                   ----               -------
	  Normal   Scheduled    6m13s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dthj to functional-933412
	  Warning  FailedMount  6m12s                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-qql4l" : failed to sync configmap cache: timed out waiting for the condition
	  Warning  Failed       2m36s (x2 over 5m11s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       2m36s (x2 over 5m11s)  kubelet            Error: ErrImagePull
	  Normal   BackOff      2m21s (x2 over 5m11s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed       2m21s (x2 over 5m11s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling      2m8s (x3 over 6m11s)   kubelet            Pulling image "kicbase/echo-server"
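The pull itself cannot succeed until the Docker Hub quota resets, but the suite's own workaround, visible in the Audit table further down, is to side-load a saved image archive straight into the node's CRI-O store. A sketch (tarball path illustrative; the Audit log used echo-server-save.tar from the Jenkins workspace):

    out/minikube-linux-amd64 -p functional-933412 image load ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-933412 image ls | grep echo-server

Note that a bare kicbase/echo-server reference implies the latest tag, for which imagePullPolicy defaults to Always, so the side-loaded copy only helps pods whose spec pins a tag or sets imagePullPolicy: IfNotPresent.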
	
	
	Name:             hello-node-connect-7d85dfc575-ppbrm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm5wr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tm5wr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason       Age                  From               Message
	  ----     ------       ----                 ----               -------
	  Normal   Scheduled    6m13s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppbrm to functional-933412
	  Warning  FailedMount  6m12s                kubelet            MountVolume.SetUp failed for volume "kube-api-access-tm5wr" : failed to sync configmap cache: timed out waiting for the condition
	  Warning  Failed       5m41s                kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       94s (x3 over 5m41s)  kubelet            Error: ErrImagePull
	  Warning  Failed       94s (x2 over 4m6s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff      66s (x4 over 5m41s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed       66s (x4 over 5m41s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling      51s (x4 over 6m11s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-czlbc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-czlbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m6s                  default-scheduler  Successfully assigned default/sp-pod to functional-933412
	  Warning  Failed     2m4s (x2 over 4m36s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m4s (x2 over 4m36s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    114s (x2 over 4m36s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     114s (x2 over 4m36s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    102s (x3 over 6m6s)   kubelet            Pulling image "docker.io/nginx"
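sp-pod is blocked purely on the nginx pull; the storage path it exists to test can still be verified independently, since a Bound claim proves the provisioner did its part (illustrative command, same context):

    kubectl --context functional-933412 get pvc myclaim -o wide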

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-97v7k" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-w4799" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.33s)

TestFunctional/parallel/ServiceCmdConnect (602.95s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-933412 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-933412 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-ppbrm" [b5061d2e-b0e6-491a-8b57-c22e7f8adc92] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-ppbrm" [b5061d2e-b0e6-491a-8b57-c22e7f8adc92] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-933412 -n functional-933412
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-20 20:49:21.177368629 +0000 UTC m=+1710.063657393
functional_test.go:1645: (dbg) Run:  kubectl --context functional-933412 describe po hello-node-connect-7d85dfc575-ppbrm -n default
functional_test.go:1645: (dbg) kubectl --context functional-933412 describe po hello-node-connect-7d85dfc575-ppbrm -n default:
Name:             hello-node-connect-7d85dfc575-ppbrm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-933412/192.168.39.212
Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm5wr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tm5wr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppbrm to functional-933412
  Warning  FailedMount  9m59s                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-tm5wr" : failed to sync configmap cache: timed out waiting for the condition
  Warning  Failed       9m28s                  kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed       2m20s (x4 over 9m28s)  kubelet            Error: ErrImagePull
  Warning  Failed       2m20s (x3 over 7m53s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff      68s (x10 over 9m28s)   kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed       68s (x10 over 9m28s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling      54s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-933412 logs hello-node-connect-7d85dfc575-ppbrm -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-933412 logs hello-node-connect-7d85dfc575-ppbrm -n default: exit status 1 (71.672991ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ppbrm" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-933412 logs hello-node-connect-7d85dfc575-ppbrm -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-933412 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-ppbrm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-933412/192.168.39.212
Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm5wr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tm5wr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppbrm to functional-933412
  Warning  FailedMount  9m59s                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-tm5wr" : failed to sync configmap cache: timed out waiting for the condition
  Warning  Failed       9m28s                  kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed       2m20s (x4 over 9m28s)  kubelet            Error: ErrImagePull
  Warning  Failed       2m20s (x3 over 7m53s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff      68s (x10 over 9m28s)   kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed       68s (x10 over 9m28s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling      54s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-933412 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-933412 logs -l app=hello-node-connect: exit status 1 (69.632232ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ppbrm" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-933412 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-933412 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.37.109
IPs:                      10.107.37.109
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30710/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
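The service side is healthy: a NodePort (30710) was allocated and the selector matches the deployment, but Endpoints stays empty because the pod never became Ready. With a Ready endpoint, the connectivity check this test performs reduces to roughly the following (node IP taken from the describe output above; a real run would typically resolve the URL via "minikube service hello-node-connect --url" instead of hard-coding it):

    curl -s http://192.168.39.212:30710/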
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-933412 -n functional-933412
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 logs -n 25: (1.474693373s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-933412 ssh sudo cat /etc/ssl/certs/77062.pem                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh sudo cat /usr/share/ca-certificates/77062.pem                                                        │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ cp             │ functional-933412 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh -n functional-933412 sudo cat /home/docker/cp-test.txt                                               │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls                                                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ cp             │ functional-933412 cp functional-933412:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2552378709/001/cp-test.txt │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image save --daemon kicbase/echo-server:functional-933412 --alsologtostderr                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh -n functional-933412 sudo cat /home/docker/cp-test.txt                                               │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ cp             │ functional-933412 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh -n functional-933412 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh echo hello                                                                                           │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh cat /etc/hostname                                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ update-context │ functional-933412 update-context --alsologtostderr -v=2                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ update-context │ functional-933412 update-context --alsologtostderr -v=2                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ update-context │ functional-933412 update-context --alsologtostderr -v=2                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls --format short --alsologtostderr                                                                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls --format yaml --alsologtostderr                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh pgrep buildkitd                                                                                      │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │                     │
	│ image          │ functional-933412 image build -t localhost/my-image:functional-933412 testdata/build --alsologtostderr                     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls                                                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls --format json --alsologtostderr                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls --format table --alsologtostderr                                                                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ service        │ functional-933412 service list                                                                                             │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:49 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:40:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:40:31.985595   18511 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:40:31.985883   18511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.985893   18511 out.go:374] Setting ErrFile to fd 2...
	I1120 20:40:31.985900   18511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.986117   18511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:40:31.986549   18511 out.go:368] Setting JSON to false
	I1120 20:40:31.987404   18511 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1382,"bootTime":1763669850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:40:31.987455   18511 start.go:143] virtualization: kvm guest
	I1120 20:40:31.989149   18511 out.go:179] * [functional-933412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:40:31.990360   18511 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:40:31.990348   18511 notify.go:221] Checking for updates...
	I1120 20:40:31.992108   18511 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:40:31.993326   18511 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:40:31.994559   18511 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:40:31.999027   18511 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:40:32.000067   18511 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:40:32.001461   18511 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:40:32.001869   18511 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:40:32.032149   18511 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 20:40:32.033200   18511 start.go:309] selected driver: kvm2
	I1120 20:40:32.033212   18511 start.go:930] validating driver "kvm2" against &{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:32.033322   18511 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:40:32.034235   18511 cni.go:84] Creating CNI manager for ""
	I1120 20:40:32.034287   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:40:32.034333   18511 start.go:353] cluster config:
	{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:32.035594   18511 out.go:179] * dry-run validation complete!
	
	
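The dry-run trace above ends with minikube's CNI selection: with no CNI requested explicitly, the "kvm2" driver paired with the "crio" runtime yields the bridge recommendation (cni.go:146). As a rough illustration, the decision amounts to something like the following Go sketch; this is a hypothetical simplification, not minikube's actual code.

package main

import "fmt"

// chooseCNI sketches the decision logged at cni.go:146 above: an explicit
// user choice wins; otherwise a VM driver such as "kvm2" paired with the
// "crio" runtime gets "bridge". Hypothetical; not minikube's implementation.
func chooseCNI(requested, driver, runtime string) string {
	if requested != "" {
		return requested
	}
	if driver == "kvm2" && runtime == "crio" {
		return "bridge"
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI("", "kvm2", "crio")) // bridge
}
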
	==> CRI-O <==
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.196608145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671762196585437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a69c957f-6f32-41ad-9fd9-99b93a051267 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.197714097Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4ff467f-34f0-4702-bf03-50608b5f7cbf name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.198023826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4ff467f-34f0-4702-bf03-50608b5f7cbf name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.198438770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4ff467f-34f0-4702-bf03-50608b5f7cbf name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.251188246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5446739-6b1e-4a19-9460-43b9b98a21a4 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.251383656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5446739-6b1e-4a19-9460-43b9b98a21a4 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.253610929Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f06eb593-1d75-4492-9b14-d428ed250607 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.254357718Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671762254332076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f06eb593-1d75-4492-9b14-d428ed250607 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.255345802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f282b688-9be8-4be7-bf4f-40cb799e73e7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.255415574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f282b688-9be8-4be7-bf4f-40cb799e73e7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.255740917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f282b688-9be8-4be7-bf4f-40cb799e73e7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.287227439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33a55597-edc4-4d41-8c4a-b84aeb095ad4 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.287317366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33a55597-edc4-4d41-8c4a-b84aeb095ad4 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.289328460Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a950371-a8e1-48fb-b0ad-27e7bdf25b62 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.290143033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671762290115285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a950371-a8e1-48fb-b0ad-27e7bdf25b62 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.291082070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3ecf0d7-00e1-433e-a26f-9f83f079f554 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.291247745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3ecf0d7-00e1-433e-a26f-9f83f079f554 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.291957795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3ecf0d7-00e1-433e-a26f-9f83f079f554 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.324531970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0db3d3a-ad30-4784-8eee-aabc6f5686de name=/runtime.v1.RuntimeService/Version
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.324604625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0db3d3a-ad30-4784-8eee-aabc6f5686de name=/runtime.v1.RuntimeService/Version
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.325951175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89332b97-a78a-43a1-9ef0-9fdc36927311 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.326553544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671762326531357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89332b97-a78a-43a1-9ef0-9fdc36927311 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.328794768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c95c5d05-6c24-463e-9c24-e338c0fe0c02 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.329145007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c95c5d05-6c24-463e-9c24-e338c0fe0c02 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:49:22 functional-933412 crio[5444]: time="2025-11-20 20:49:22.329617281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cdc6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c95c5d05-6c24-463e-9c24-e338c0fe0c02 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	109d4bb80eac7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 minutes ago       Exited              mount-munger              0                   9d5410f166b27       busybox-mount                               default
	2742e68c74423       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Running             kube-proxy                3                   1f6a7ab0c91ba       kube-proxy-6xnj6                            kube-system
	673e5b087a6e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   f1635aeef3e54       storage-provisioner                         kube-system
	564bc1707bc93       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   e94448a6ba4f6       kube-apiserver-functional-933412            kube-system
	90434f3698428       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   3                   7026443238fe7       kube-controller-manager-functional-933412   kube-system
	22f1327d1dafb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            3                   512c473573a1e       kube-scheduler-functional-933412            kube-system
	52d08d12cb18e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      3                   d61277493dd5b       etcd-functional-933412                      kube-system
	01fbf4a1da609       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       2                   f1635aeef3e54       storage-provisioner                         kube-system
	71d72227e095d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   abafba1840584       coredns-66bc5c9577-2b7p9                    kube-system
	e2153be7a1118       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Exited              etcd                      2                   d61277493dd5b       etcd-functional-933412                      kube-system
	ac98f1d3b4d98       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Exited              kube-controller-manager   2                   7026443238fe7       kube-controller-manager-functional-933412   kube-system
	06fbf273e2f27       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Exited              kube-scheduler            2                   512c473573a1e       kube-scheduler-functional-933412            kube-system
	2a0c4fbb9b5d7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Exited              kube-proxy                2                   1f6a7ab0c91ba       kube-proxy-6xnj6                            kube-system
	2f33930aaa277       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   a27089054680f       coredns-66bc5c9577-2b7p9                    kube-system
	
	
	==> coredns [2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40383 - 36748 "HINFO IN 5820690942743418349.4099311619110396990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022726285s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0] <==
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45883 - 50568 "HINFO IN 5877995768124618261.4169147699699731941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.421086977s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41888->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41918->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41904->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
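
Note: the TLS handshake timeouts and connection-refused errors against 10.96.0.1:443 show this coredns replica starting while the kube-apiserver behind the kubernetes Service was still restarting; the client-go reflectors retry until the readiness plugin stops reporting "Still waiting on: kubernetes". Once the apiserver is healthy, in-cluster Service DNS can be spot-checked in the same style as the test's own probes (a sketch; the pod name dns-probe is arbitrary):

	kubectl --context functional-933412 run dns-probe --rm --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- nslookup kubernetes.default.svc.cluster.local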
	
	
	==> describe nodes <==
	Name:               functional-933412
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-933412
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=functional-933412
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-933412
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:49:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:45:55 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:45:55 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:45:55 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:45:55 +0000   Thu, 20 Nov 2025 20:37:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    functional-933412
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 acb893993e724fd68d51aa75dbb6007a
	  System UUID:                acb89399-3e72-4fd6-8d51-aa75dbb6007a
	  Boot ID:                    8a667e28-1db3-4eb5-acb8-0cecc80439c5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2dthj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-ppbrm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-77r7s                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    3m46s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-2b7p9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-933412                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-933412              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-933412     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6xnj6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-933412              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-97v7k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-w4799         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-933412 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
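
Note: the Allocated resources block is the column sums of the pod table above: CPU requests 600m + 100m + 250m + 200m + 100m + 100m = 1350m, i.e. about 67% of the node's 2-CPU (2000m) allocatable; memory requests 512Mi + 70Mi + 100Mi = 682Mi, about 17% of the 4001788Ki (~3908Mi) allocatable. This view can be regenerated at any time with:

	kubectl --context functional-933412 describe node functional-933412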
	
	
	==> dmesg <==
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081853] kauditd_printk_skb: 1 callbacks suppressed
	[Nov20 20:37] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.149691] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.660350] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.929408] kauditd_printk_skb: 249 callbacks suppressed
	[  +0.111579] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.014171] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.566090] kauditd_printk_skb: 176 callbacks suppressed
	[Nov20 20:38] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.114496] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.272570] kauditd_printk_skb: 182 callbacks suppressed
	[  +2.672253] kauditd_printk_skb: 229 callbacks suppressed
	[  +6.943960] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.127034] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.126294] kauditd_printk_skb: 121 callbacks suppressed
	[Nov20 20:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.055294] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.001227] kauditd_printk_skb: 104 callbacks suppressed
	[ +25.129996] kauditd_printk_skb: 26 callbacks suppressed
	[Nov20 20:40] kauditd_printk_skb: 29 callbacks suppressed
	[Nov20 20:41] kauditd_printk_skb: 68 callbacks suppressed
	[Nov20 20:45] crun[10216]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[Nov20 20:49] kauditd_printk_skb: 42 callbacks suppressed
	
	
	==> etcd [52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4] <==
	{"level":"warn","ts":"2025-11-20T20:38:55.537892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.556105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.570738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.578629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.584498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.598770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.615999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.627387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.635890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.644879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.658598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.671138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.678504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.686422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.693713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.703372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.716480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.726224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.743373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.760117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.770191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.861274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46834","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T20:48:55.064000Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1045}
	{"level":"info","ts":"2025-11-20T20:48:55.095821Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1045,"took":"31.410952ms","hash":2931667262,"current-db-size-bytes":3395584,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1482752,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-20T20:48:55.095865Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2931667262,"revision":1045,"compact-revision":-1}
	
	
	==> etcd [e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952] <==
	{"level":"info","ts":"2025-11-20T20:38:29.355734Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","recovered-remote-peer-id":"eed9c28654b6490f","recovered-remote-peer-urls":["https://192.168.39.212:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:38:29.356742Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.356755Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-20T20:38:29.356796Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-20T20:38:29.356888Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"eed9c28654b6490f switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-20T20:38:29.356945Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"eed9c28654b6490f became follower at term 3"}
	{"level":"info","ts":"2025-11-20T20:38:29.356975Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft eed9c28654b6490f [peers: [], term: 3, commit: 552, applied: 0, lastindex: 552, lastterm: 3]"}
	{"level":"warn","ts":"2025-11-20T20:38:29.364020Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-20T20:38:29.391830Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":511}
	{"level":"info","ts":"2025-11-20T20:38:29.401745Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-20T20:38:29.403842Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"eed9c28654b6490f","timeout":"7s"}
	{"level":"info","ts":"2025-11-20T20:38:29.405934Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"eed9c28654b6490f"}
	{"level":"info","ts":"2025-11-20T20:38:29.406004Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"eed9c28654b6490f","local-server-version":"3.6.4","cluster-id":"f8d3b95e5bbb719c","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.406949Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"eed9c28654b6490f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407041Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407109Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407119Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407311Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"eed9c28654b6490f switched to configuration voters=(17211001333175699727)"}
	{"level":"info","ts":"2025-11-20T20:38:29.407396Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","added-peer-id":"eed9c28654b6490f","added-peer-peer-urls":["https://192.168.39.212:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:38:29.407490Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.414334Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T20:38:29.418300Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"eed9c28654b6490f","initial-advertise-peer-urls":["https://192.168.39.212:2380"],"listen-peer-urls":["https://192.168.39.212:2380"],"advertise-client-urls":["https://192.168.39.212:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.212:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T20:38:29.418370Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T20:38:29.418498Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2025-11-20T20:38:29.418530Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.212:2380"}
	
	
	==> kernel <==
	 20:49:22 up 12 min,  0 users,  load average: 0.05, 0.25, 0.27
	Linux functional-933412 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a] <==
	I1120 20:38:56.715837       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 20:38:56.717113       1 aggregator.go:171] initial CRD sync complete...
	I1120 20:38:56.717416       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 20:38:56.717505       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 20:38:56.717522       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:38:56.725574       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:38:56.725821       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1120 20:38:56.748285       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 20:38:57.026303       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:38:57.489472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:38:58.325699       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:38:58.364821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:38:58.390567       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:38:58.397107       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:39:00.061709       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:39:00.112371       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:39:00.362096       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:39:16.327502       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.145.237"}
	I1120 20:39:20.910863       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.37.109"}
	I1120 20:39:20.942215       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.60.10"}
	I1120 20:40:32.961131       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:40:33.299327       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.87.218"}
	I1120 20:40:33.323787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.190.79"}
	I1120 20:45:36.774747       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.175.232"}
	I1120 20:48:56.632121       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
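
Note: the "quota admission added evaluator" lines trace the ResourceQuota admission plugin lazily registering an evaluator the first time each resource type is created after the restart, and the alloc.go lines record ClusterIP assignment for the Services created by the functional tests. The resulting Services and their cluster IPs can be listed with:

	kubectl --context functional-933412 get svc -A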
	
	
	==> kube-controller-manager [90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17] <==
	I1120 20:38:59.997635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:39:00.006780       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 20:39:00.006997       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 20:39:00.007087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:39:00.007130       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:39:00.007148       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:39:00.007149       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:39:00.008479       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:39:00.009407       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 20:39:00.012086       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:39:00.012538       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:39:00.019115       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 20:39:00.032896       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 20:39:00.039254       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 20:39:00.041533       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:39:00.042922       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:39:00.048280       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 20:39:00.055113       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1120 20:40:33.053616       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.093742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.097347       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.109865       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.113801       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.127931       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.129241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
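
Note: these replica_set.go errors are a startup race, not a persistent failure. The dashboard addon's ReplicaSets were synced before their kubernetes-dashboard ServiceAccount existed, so the first few pod-creation attempts were forbidden; the controller retries, and both dashboard pods do appear in the node's pod list above. The current state can be checked with:

	kubectl --context functional-933412 -n kubernetes-dashboard get serviceaccounts,replicasets,pods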
	
	
	==> kube-controller-manager [ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cdc6e2759d64e3b6d] <==
	
	
	==> kube-proxy [2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262] <==
	I1120 20:38:57.537634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:38:57.639095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:38:57.639246       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.212"]
	E1120 20:38:57.639542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:38:57.744439       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:38:57.744545       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:38:57.744583       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:38:57.758377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:38:57.760174       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:38:57.760235       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:38:57.767043       1 config.go:200] "Starting service config controller"
	I1120 20:38:57.767096       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:38:57.767126       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:38:57.767140       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:38:57.767160       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:38:57.767173       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:38:57.767955       1 config.go:309] "Starting node config controller"
	I1120 20:38:57.767998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:38:57.768014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:38:57.867737       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:38:57.867768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:38:57.867742       1 shared_informer.go:356] "Caches are synced" controller="service config"
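
Note: the "No iptables support for family IPv6" warning means the VM's kernel lacks the ip6tables nat table, so kube-proxy falls back to a single-stack IPv4 iptables proxier; the subsequent "Caches are synced" lines confirm it programmed its rules. The generated Service rules can be inspected on the node (a sketch; KUBE-SERVICES is the iptables proxier's top-level nat chain):

	out/minikube-linux-amd64 -p functional-933412 ssh -- sudo iptables -t nat -L KUBE-SERVICES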
	
	
	==> kube-proxy [2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae] <==
	I1120 20:38:28.682607       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:38:28.797288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1120 20:38:28.818903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-933412&limit=500&resourceVersion=0\": dial tcp 192.168.39.212:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:38:40.372759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-933412&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99] <==
	I1120 20:38:31.345686       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b] <==
	I1120 20:38:55.103741       1 serving.go:386] Generated self-signed cert in-memory
	I1120 20:38:56.746614       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 20:38:56.746746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:38:56.758093       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 20:38:56.758871       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 20:38:56.758924       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 20:38:56.758964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 20:38:56.760186       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.760330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.760231       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 20:38:56.766036       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 20:38:56.859179       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 20:38:56.861243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.871189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:48:23 functional-933412 kubelet[6715]: E1120 20:48:23.239027    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671703237737228  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:48:31 functional-933412 kubelet[6715]: E1120 20:48:31.567865    6715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 20 20:48:31 functional-933412 kubelet[6715]: E1120 20:48:31.567992    6715 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 20 20:48:31 functional-933412 kubelet[6715]: E1120 20:48:31.568263    6715 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-97v7k_kubernetes-dashboard(4811a1cd-b896-49a4-8130-01de71cc2b82): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:48:31 functional-933412 kubelet[6715]: E1120 20:48:31.568314    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:48:33 functional-933412 kubelet[6715]: E1120 20:48:33.242305    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671713242035677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:48:33 functional-933412 kubelet[6715]: E1120 20:48:33.242324    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671713242035677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:48:43 functional-933412 kubelet[6715]: E1120 20:48:43.245449    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671723244487510  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:48:43 functional-933412 kubelet[6715]: E1120 20:48:43.245523    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671723244487510  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:48:44 functional-933412 kubelet[6715]: E1120 20:48:44.975196    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:48:53 functional-933412 kubelet[6715]: E1120 20:48:53.085820    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod952cefb1-c3e7-481c-bb72-d7f96fde7bd9/crio-a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1: Error finding container a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1: Status 404 returned error can't find the container with id a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1
	Nov 20 20:48:53 functional-933412 kubelet[6715]: E1120 20:48:53.086524    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod886f9cf737a5ee47db4d863c4c536829/crio-37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95: Error finding container 37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95: Status 404 returned error can't find the container with id 37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95
	Nov 20 20:48:53 functional-933412 kubelet[6715]: E1120 20:48:53.248532    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671733247304905  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:48:53 functional-933412 kubelet[6715]: E1120 20:48:53.248561    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671733247304905  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:48:55 functional-933412 kubelet[6715]: E1120 20:48:55.973627    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:49:01 functional-933412 kubelet[6715]: E1120 20:49:01.672619    6715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Nov 20 20:49:01 functional-933412 kubelet[6715]: E1120 20:49:01.672990    6715 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Nov 20 20:49:01 functional-933412 kubelet[6715]: E1120 20:49:01.673170    6715 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-77r7s_default(4e3a647a-ad51-4455-93d6-b5ad363385d8): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:49:01 functional-933412 kubelet[6715]: E1120 20:49:01.673203    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-77r7s" podUID="4e3a647a-ad51-4455-93d6-b5ad363385d8"
	Nov 20 20:49:01 functional-933412 kubelet[6715]: E1120 20:49:01.807749    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-77r7s" podUID="4e3a647a-ad51-4455-93d6-b5ad363385d8"
	Nov 20 20:49:03 functional-933412 kubelet[6715]: E1120 20:49:03.251258    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671743250759791  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:49:03 functional-933412 kubelet[6715]: E1120 20:49:03.251278    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671743250759791  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:49:07 functional-933412 kubelet[6715]: E1120 20:49:07.974181    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:49:13 functional-933412 kubelet[6715]: E1120 20:49:13.253447    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671753252768156  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:49:13 functional-933412 kubelet[6715]: E1120 20:49:13.253467    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671753252768156  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	
	
	==> storage-provisioner [01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b] <==
	I1120 20:38:34.406720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 20:38:44.409322       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: TLS handshake timeout
	
	
	==> storage-provisioner [673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a] <==
	W1120 20:48:58.043391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:00.046525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:00.052737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:02.057264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:02.062070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:04.065952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:04.071220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:06.074636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:06.082872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:08.087363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:08.096003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:10.100052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:10.109057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:12.113155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:12.117637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:14.120866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:14.126577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:16.130611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:16.138249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:18.142283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:18.148085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:20.151433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:20.157142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:22.163150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:49:22.176540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
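
The failures captured in the dump above share a single root cause: unauthenticated pulls from docker.io tripping the Hub rate limit (the repeated "toomanyrequests" kubelet errors for kubernetesui/metrics-scraper and mysql:5.7). For a rerun, one way to keep the suite off the registry entirely is to pre-load the affected images into the profile before the parallel tests start. A minimal sketch, assuming the profile name from this run and a host where "docker login" has been performed to raise the pull limit:

	# Pull on the host, where the pulls can be authenticated.
	docker pull docker.io/mysql:5.7
	docker pull docker.io/kicbase/echo-server:latest
	# Copy the host-side images into the minikube node so kubelet never pulls.
	minikube -p functional-933412 image load docker.io/mysql:5.7
	minikube -p functional-933412 image load docker.io/kicbase/echo-server:latest
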
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-933412 -n functional-933412
helpers_test.go:269: (dbg) Run:  kubectl --context functional-933412 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm mysql-5bb876957f-77r7s sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm mysql-5bb876957f-77r7s sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm mysql-5bb876957f-77r7s sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799: exit status 1 (101.741353ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 20 Nov 2025 20:40:25 +0000
	      Finished:     Thu, 20 Nov 2025 20:40:26 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62gmv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-62gmv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-933412
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8m58s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.495s (1m2.514s including waiting). Image size: 4631262 bytes.
	  Normal  Created    8m58s  kubelet            Created container: mount-munger
	  Normal  Started    8m57s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-2dthj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qql4l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qql4l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason       Age                  From               Message
	  ----     ------       ----                 ----               -------
	  Normal   Scheduled    10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dthj to functional-933412
	  Warning  FailedMount  10m                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-qql4l" : failed to sync configmap cache: timed out waiting for the condition
	  Warning  Failed       6m25s (x2 over 9m)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       3m22s (x3 over 9m)   kubelet            Error: ErrImagePull
	  Warning  Failed       3m22s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff      2m48s (x5 over 9m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed       2m48s (x5 over 9m)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling      2m33s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ppbrm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm5wr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tm5wr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason       Age                    From               Message
	  ----     ------       ----                   ----               -------
	  Normal   Scheduled    10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppbrm to functional-933412
	  Warning  FailedMount  10m                    kubelet            MountVolume.SetUp failed for volume "kube-api-access-tm5wr" : failed to sync configmap cache: timed out waiting for the condition
	  Warning  Failed       9m30s                  kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       2m22s (x4 over 9m30s)  kubelet            Error: ErrImagePull
	  Warning  Failed       2m22s (x3 over 7m55s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff      70s (x10 over 9m30s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed       70s (x10 over 9m30s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling      56s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-77r7s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:45:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lbr6v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lbr6v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m46s               default-scheduler  Successfully assigned default/mysql-5bb876957f-77r7s to functional-933412
	  Warning  Failed     22s                 kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     22s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    22s                 kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     22s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    9s (x2 over 3m46s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-czlbc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-czlbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m55s                  default-scheduler  Successfully assigned default/sp-pod to functional-933412
	  Warning  Failed     5m53s (x2 over 8m25s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m52s (x3 over 8m25s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m52s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m15s (x5 over 8m25s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m15s (x5 over 8m25s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m2s (x4 over 9m55s)   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-97v7k" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-w4799" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm mysql-5bb876957f-77r7s sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.95s)
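
ServiceCmdConnect failed for the same reason: both echo-server pods sat in ImagePullBackOff for the whole 10m window, so the service under test never had a ready endpoint. When triaging a run like this, two read-only checks separate pull-rate failures from genuine service problems (a sketch, reusing the kubectl context from this run):

	# List every pod that is not Running, across all namespaces.
	kubectl --context functional-933412 get pods -A --field-selector=status.phase!=Running
	# Show recent image-pull failure events, newest last.
	kubectl --context functional-933412 get events -A --field-selector=reason=Failed --sort-by=.lastTimestamp
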

TestFunctional/parallel/PersistentVolumeClaim (370.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [903793d0-3eb5-4f21-a0c6-580ef4002705] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003713611s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-933412 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-933412 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-933412 get pvc myclaim -o=json
I1120 20:39:26.565838    7706 retry.go:31] will retry after 1.230671885s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:29478631-785b-42ac-8121-0dfa41ca4cc1 ResourceVersion:692 Generation:0 CreationTimestamp:2025-11-20 20:39:26 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-29478631-785b-42ac-8121-0dfa41ca4cc1 StorageClassName:0xc001786b60 VolumeMode:0xc001786b70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-933412 get pvc myclaim -o=json
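
The harness polls "kubectl get pvc" until the claim reports phase Bound; the single retry above succeeded on the next poll (the JSON already carries VolumeName pvc-29478631-785b-42ac-8121-0dfa41ca4cc1 and the pv.kubernetes.io/bind-completed annotation). The same wait can be expressed declaratively when reproducing by hand, a sketch assuming kubectl v1.23 or newer for jsonpath waits:

	kubectl --context functional-933412 wait --for=jsonpath='{.status.phase}'=Bound pvc/myclaim --timeout=60s
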
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-933412 apply -f testdata/storage-provisioner/pod.yaml
I1120 20:39:27.988078    7706 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3c972cf3-8435-4a39-8c33-cc134f096e49] Pending
helpers_test.go:352: "sp-pod" [3c972cf3-8435-4a39-8c33-cc134f096e49] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1120 20:39:36.327285    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:36.333711    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:36.345037    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:36.366433    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:36.407897    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:36.489397    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:36.650937    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:36.973002    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:37.615310    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:38.897624    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:41.459787    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:46.581730    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:39:56.823928    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:40:17.306200    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-933412 -n functional-933412
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-20 20:45:28.221771347 +0000 UTC m=+1477.108060108
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-933412 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-933412 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-933412/192.168.39.212
Start Time:       Thu, 20 Nov 2025 20:39:27 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-czlbc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-czlbc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/sp-pod to functional-933412
Warning  Failed     118s (x2 over 4m30s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     118s (x2 over 4m30s)  kubelet            Error: ErrImagePull
Normal   BackOff    108s (x2 over 4m30s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     108s (x2 over 4m30s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    96s (x3 over 6m)      kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-933412 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-933412 logs sp-pod -n default: exit status 1 (72.684683ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-933412 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
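
Note that the claim itself bound within seconds (see the VolumeName in the retry message above), so the 6m timeout is attributable to the docker.io/nginx pull, not to the storage provisioner under test. Two quick checks make that split explicit (same context as above):

	# The claim should report Bound even while sp-pod is stuck pulling.
	kubectl --context functional-933412 get pvc myclaim -o jsonpath='{.status.phase}'
	# The dynamically provisioned hostpath volume should be listed and Bound.
	kubectl --context functional-933412 get pv
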
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-933412 -n functional-933412
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 logs -n 25: (1.409980524s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-933412 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:39 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdany-port887797097/001:/mount-9p --alsologtostderr -v=1                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:39 UTC │                     │
	│ ssh       │ functional-933412 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:39 UTC │ 20 Nov 25 20:39 UTC │
	│ ssh       │ functional-933412 ssh -- ls -la /mount-9p                                                                                         │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:39 UTC │ 20 Nov 25 20:39 UTC │
	│ ssh       │ functional-933412 ssh cat /mount-9p/test-1763671161701273709                                                                      │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:39 UTC │ 20 Nov 25 20:39 UTC │
	│ ssh       │ functional-933412 ssh stat /mount-9p/created-by-test                                                                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh stat /mount-9p/created-by-pod                                                                               │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh sudo umount -f /mount-9p                                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdspecific-port3575881837/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ ssh       │ functional-933412 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh -- ls -la /mount-9p                                                                                         │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh sudo umount -f /mount-9p                                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ ssh       │ functional-933412 ssh findmnt -T /mount1                                                                                          │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount1 --alsologtostderr -v=1                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount2 --alsologtostderr -v=1                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ mount     │ -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount3 --alsologtostderr -v=1                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ ssh       │ functional-933412 ssh findmnt -T /mount1                                                                                          │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh findmnt -T /mount2                                                                                          │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ ssh       │ functional-933412 ssh findmnt -T /mount3                                                                                          │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │ 20 Nov 25 20:40 UTC │
	│ mount     │ -p functional-933412 --kill=true                                                                                                  │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ start     │ -p functional-933412 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ start     │ -p functional-933412 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ start     │ -p functional-933412 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-933412 --alsologtostderr -v=1                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:40 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:40:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:40:31.985595   18511 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:40:31.985883   18511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.985893   18511 out.go:374] Setting ErrFile to fd 2...
	I1120 20:40:31.985900   18511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.986117   18511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:40:31.986549   18511 out.go:368] Setting JSON to false
	I1120 20:40:31.987404   18511 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1382,"bootTime":1763669850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:40:31.987455   18511 start.go:143] virtualization: kvm guest
	I1120 20:40:31.989149   18511 out.go:179] * [functional-933412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:40:31.990360   18511 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:40:31.990348   18511 notify.go:221] Checking for updates...
	I1120 20:40:31.992108   18511 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:40:31.993326   18511 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:40:31.994559   18511 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:40:31.999027   18511 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:40:32.000067   18511 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:40:32.001461   18511 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:40:32.001869   18511 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:40:32.032149   18511 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 20:40:32.033200   18511 start.go:309] selected driver: kvm2
	I1120 20:40:32.033212   18511 start.go:930] validating driver "kvm2" against &{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:32.033322   18511 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:40:32.034235   18511 cni.go:84] Creating CNI manager for ""
	I1120 20:40:32.034287   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:40:32.034333   18511 start.go:353] cluster config:
	{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:32.035594   18511 out.go:179] * dry-run validation complete!
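The tail above is a dry-run pass: minikube re-validates the saved functional-933412 profile (kvm2 driver, crio runtime, bridge CNI recommended) without mutating the VM. A minimal sketch of reproducing that validation by hand, assuming the harness's build tree; the binary path and profile name come from the log above, while the verbosity flags are assumptions:

	# hypothetical re-run of the validation step; --dry-run validates config without changing state
	out/minikube-linux-amd64 start -p functional-933412 --dry-run --alsologtostderr -v=1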
	
	
	==> CRI-O <==
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.019805843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671529019783051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79fd688e-35d1-403a-b0d5-af8b52e84eca name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.021150145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07c60e91-3a8d-46c9-a9e6-6bdb12d981d7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.021228880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07c60e91-3a8d-46c9-a9e6-6bdb12d981d7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.021486645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07c60e91-3a8d-46c9-a9e6-6bdb12d981d7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.065169865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c10ad6a2-563f-4d7a-a97c-fffa65c75ed0 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.065282863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c10ad6a2-563f-4d7a-a97c-fffa65c75ed0 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.068014417Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5fa8ef0-82e4-478c-b29a-6e8a3668dd99 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.068581277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671529068557861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5fa8ef0-82e4-478c-b29a-6e8a3668dd99 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.070143141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=812f6079-681b-457a-8a1a-ee391b2663ba name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.070425781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=812f6079-681b-457a-8a1a-ee391b2663ba name=/runtime.v1.RuntimeService/ListContainers
	[ListContainers response (id=812f6079-681b-457a-8a1a-ee391b2663ba, 20:45:29.070) omitted: identical container list to the 20:45:29.021 response above, only the request id differs]
	[one further identical polling cycle (Version id=8002f3c1, ImageFsInfo id=24c11c9c, ListContainers id=2d1f3aef, 20:45:29.104-.108) omitted: same responses as the cycle above]
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.141819906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bbc998f-5f19-4070-8091-51c610b9b020 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.141924484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bbc998f-5f19-4070-8091-51c610b9b020 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.143304654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53a16afb-ce06-432d-bfdd-08350ec85bd5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.143848705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763671529143827020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53a16afb-ce06-432d-bfdd-08350ec85bd5 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.144851562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efec1d45-566c-48da-9ca0-69beb8d92415 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.145081737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efec1d45-566c-48da-9ca0-69beb8d92415 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:45:29 functional-933412 crio[5444]: time="2025-11-20 20:45:29.145400360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cdc6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efec1d45-566c-48da-9ca0-69beb8d92415 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	109d4bb80eac7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   9d5410f166b27       busybox-mount                               default
	2742e68c74423       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                3                   1f6a7ab0c91ba       kube-proxy-6xnj6                            kube-system
	673e5b087a6e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       3                   f1635aeef3e54       storage-provisioner                         kube-system
	564bc1707bc93       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   e94448a6ba4f6       kube-apiserver-functional-933412            kube-system
	90434f3698428       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   3                   7026443238fe7       kube-controller-manager-functional-933412   kube-system
	22f1327d1dafb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            3                   512c473573a1e       kube-scheduler-functional-933412            kube-system
	52d08d12cb18e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      3                   d61277493dd5b       etcd-functional-933412                      kube-system
	01fbf4a1da609       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       2                   f1635aeef3e54       storage-provisioner                         kube-system
	71d72227e095d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   2                   abafba1840584       coredns-66bc5c9577-2b7p9                    kube-system
	e2153be7a1118       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      2                   d61277493dd5b       etcd-functional-933412                      kube-system
	ac98f1d3b4d98       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      7 minutes ago       Exited              kube-controller-manager   2                   7026443238fe7       kube-controller-manager-functional-933412   kube-system
	06fbf273e2f27       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            2                   512c473573a1e       kube-scheduler-functional-933412            kube-system
	2a0c4fbb9b5d7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                2                   1f6a7ab0c91ba       kube-proxy-6xnj6                            kube-system
	2f33930aaa277       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   1                   a27089054680f       coredns-66bc5c9577-2b7p9                    kube-system
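
The table above is the unfiltered ListContainers response from the crio debug log rendered as a table. A minimal sketch for reproducing it by hand, assuming the functional-933412 profile is still running and crictl is available in the guest (it ships on the minikube ISO):

  # List all containers, including exited ones, straight from CRI-O:
  $ minikube ssh -p functional-933412 -- sudo crictl ps -a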
	
	
	==> coredns [2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40383 - 36748 "HINFO IN 5820690942743418349.4099311619110396990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022726285s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0] <==
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45883 - 50568 "HINFO IN 5877995768124618261.4169147699699731941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.421086977s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41888->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41918->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41904->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
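
The list/watch errors above span the window in which the API server behind the 10.96.0.1:443 service VIP was restarting; once the kubernetes plugin finishes its sync, the "Still waiting on" messages stop. A hedged sketch for checking the two probe ports declared in this pod's annotations (8181 readiness, 8080 liveness); the port-forward approach is illustrative, not something the test harness ran:

  # Forward the CoreDNS probe ports to the host, then query them:
  $ kubectl --context functional-933412 -n kube-system port-forward pod/coredns-66bc5c9577-2b7p9 8181:8181 8080:8080 &
  $ curl -s http://127.0.0.1:8181/ready    # fails until the kubernetes plugin has synced
  $ curl -s http://127.0.0.1:8080/health   # stays OK even while the API server is down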
	
	
	==> describe nodes <==
	Name:               functional-933412
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-933412
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=functional-933412
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-933412
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:45:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:40:59 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:40:59 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:40:59 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:40:59 +0000   Thu, 20 Nov 2025 20:37:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    functional-933412
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 acb893993e724fd68d51aa75dbb6007a
	  System UUID:                acb89399-3e72-4fd6-8d51-aa75dbb6007a
	  Boot ID:                    8a667e28-1db3-4eb5-acb8-0cecc80439c5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2dthj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     hello-node-connect-7d85dfc575-ppbrm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-2b7p9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m10s
	  kube-system                 etcd-functional-933412                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m17s
	  kube-system                 kube-apiserver-functional-933412              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-933412     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-proxy-6xnj6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-functional-933412              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-97v7k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-w4799         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m9s                   kube-proxy       
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  Starting                 7m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m16s                  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m16s                  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s                  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m16s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m15s                  kubelet          Node functional-933412 status is now: NodeReady
	  Normal  RegisteredNode           8m11s                  node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	  Normal  NodeHasNoDiskPressure    7m42s (x8 over 7m42s)  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m42s (x8 over 7m42s)  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m42s (x7 over 7m42s)  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m35s                  node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m36s (x8 over 6m36s)  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x8 over 6m36s)  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x7 over 6m36s)  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m30s                  node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
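
The three Starting/RegisteredNode clusters in the events above (at 8m16s, 7m42s, and 6m36s) line up with the three "Starting kubelet." entries, i.e. the control-plane restarts the functional test performs. A hedged sketch for pulling the same node-scoped events in chronological order, assuming they have not yet aged out:

  $ kubectl --context functional-933412 get events \
      --field-selector involvedObject.name=functional-933412 \
      --sort-by=.lastTimestamp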
	
	
	==> dmesg <==
	[  +0.002100] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.203979] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081853] kauditd_printk_skb: 1 callbacks suppressed
	[Nov20 20:37] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.149691] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.660350] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.929408] kauditd_printk_skb: 249 callbacks suppressed
	[  +0.111579] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.014171] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.566090] kauditd_printk_skb: 176 callbacks suppressed
	[Nov20 20:38] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.114496] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.272570] kauditd_printk_skb: 182 callbacks suppressed
	[  +2.672253] kauditd_printk_skb: 229 callbacks suppressed
	[  +6.943960] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.127034] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.126294] kauditd_printk_skb: 121 callbacks suppressed
	[Nov20 20:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.055294] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.001227] kauditd_printk_skb: 104 callbacks suppressed
	[ +25.129996] kauditd_printk_skb: 26 callbacks suppressed
	[Nov20 20:40] kauditd_printk_skb: 29 callbacks suppressed
	[Nov20 20:41] kauditd_printk_skb: 68 callbacks suppressed
	
	
	==> etcd [52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4] <==
	{"level":"warn","ts":"2025-11-20T20:38:55.504876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.517219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.522964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.537892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.556105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.570738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.578629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.584498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.598770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.615999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.627387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.635890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.644879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.658598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.671138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.678504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.686422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.693713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.703372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.716480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.726224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.743373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.760117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.770191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.861274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46834","server-name":"","error":"EOF"}
	
	
	==> etcd [e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952] <==
	{"level":"info","ts":"2025-11-20T20:38:29.355734Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","recovered-remote-peer-id":"eed9c28654b6490f","recovered-remote-peer-urls":["https://192.168.39.212:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:38:29.356742Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.356755Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-20T20:38:29.356796Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-20T20:38:29.356888Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"eed9c28654b6490f switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-20T20:38:29.356945Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"eed9c28654b6490f became follower at term 3"}
	{"level":"info","ts":"2025-11-20T20:38:29.356975Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft eed9c28654b6490f [peers: [], term: 3, commit: 552, applied: 0, lastindex: 552, lastterm: 3]"}
	{"level":"warn","ts":"2025-11-20T20:38:29.364020Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-20T20:38:29.391830Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":511}
	{"level":"info","ts":"2025-11-20T20:38:29.401745Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-20T20:38:29.403842Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"eed9c28654b6490f","timeout":"7s"}
	{"level":"info","ts":"2025-11-20T20:38:29.405934Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"eed9c28654b6490f"}
	{"level":"info","ts":"2025-11-20T20:38:29.406004Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"eed9c28654b6490f","local-server-version":"3.6.4","cluster-id":"f8d3b95e5bbb719c","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.406949Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"eed9c28654b6490f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407041Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407109Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407119Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407311Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"eed9c28654b6490f switched to configuration voters=(17211001333175699727)"}
	{"level":"info","ts":"2025-11-20T20:38:29.407396Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","added-peer-id":"eed9c28654b6490f","added-peer-peer-urls":["https://192.168.39.212:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:38:29.407490Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.414334Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T20:38:29.418300Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"eed9c28654b6490f","initial-advertise-peer-urls":["https://192.168.39.212:2380"],"listen-peer-urls":["https://192.168.39.212:2380"],"advertise-client-urls":["https://192.168.39.212:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.212:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T20:38:29.418370Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T20:38:29.418498Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2025-11-20T20:38:29.418530Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.212:2380"}
	
	
	==> kernel <==
	 20:45:29 up 8 min,  0 users,  load average: 0.24, 0.46, 0.33
	Linux functional-933412 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a] <==
	I1120 20:38:56.695513       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 20:38:56.697230       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 20:38:56.715837       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 20:38:56.717113       1 aggregator.go:171] initial CRD sync complete...
	I1120 20:38:56.717416       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 20:38:56.717505       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 20:38:56.717522       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:38:56.725574       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:38:56.725821       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1120 20:38:56.748285       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 20:38:57.026303       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:38:57.489472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:38:58.325699       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:38:58.364821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:38:58.390567       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:38:58.397107       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:39:00.061709       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:39:00.112371       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:39:00.362096       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:39:16.327502       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.145.237"}
	I1120 20:39:20.910863       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.37.109"}
	I1120 20:39:20.942215       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.60.10"}
	I1120 20:40:32.961131       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:40:33.299327       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.87.218"}
	I1120 20:40:33.323787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.190.79"}
	
	
	==> kube-controller-manager [90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17] <==
	I1120 20:38:59.997635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:39:00.006780       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 20:39:00.006997       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 20:39:00.007087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:39:00.007130       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:39:00.007148       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:39:00.007149       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:39:00.008479       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:39:00.009407       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 20:39:00.012086       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:39:00.012538       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:39:00.019115       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 20:39:00.032896       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 20:39:00.039254       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 20:39:00.041533       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:39:00.042922       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:39:00.048280       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 20:39:00.055113       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1120 20:40:33.053616       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.093742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.097347       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.109865       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.113801       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.127931       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.129241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cdc6e2759d64e3b6d] <==
	
	
	==> kube-proxy [2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262] <==
	I1120 20:38:57.537634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:38:57.639095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:38:57.639246       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.212"]
	E1120 20:38:57.639542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:38:57.744439       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:38:57.744545       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:38:57.744583       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:38:57.758377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:38:57.760174       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:38:57.760235       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:38:57.767043       1 config.go:200] "Starting service config controller"
	I1120 20:38:57.767096       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:38:57.767126       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:38:57.767140       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:38:57.767160       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:38:57.767173       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:38:57.767955       1 config.go:309] "Starting node config controller"
	I1120 20:38:57.767998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:38:57.768014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:38:57.867737       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:38:57.867768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:38:57.867742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae] <==
	I1120 20:38:28.682607       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:38:28.797288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1120 20:38:28.818903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-933412&limit=500&resourceVersion=0\": dial tcp 192.168.39.212:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:38:40.372759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-933412&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99] <==
	I1120 20:38:31.345686       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b] <==
	I1120 20:38:55.103741       1 serving.go:386] Generated self-signed cert in-memory
	I1120 20:38:56.746614       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 20:38:56.746746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:38:56.758093       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 20:38:56.758871       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 20:38:56.758924       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 20:38:56.758964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 20:38:56.760186       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.760330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.760231       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 20:38:56.766036       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 20:38:56.859179       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 20:38:56.861243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.871189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:44:28 functional-933412 kubelet[6715]: E1120 20:44:28.976538    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppbrm" podUID="b5061d2e-b0e6-491a-8b57-c22e7f8adc92"
	Nov 20 20:44:30 functional-933412 kubelet[6715]: E1120 20:44:30.773913    6715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 20 20:44:30 functional-933412 kubelet[6715]: E1120 20:44:30.773994    6715 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 20 20:44:30 functional-933412 kubelet[6715]: E1120 20:44:30.774268    6715 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-w4799_kubernetes-dashboard(753b6014-a9c2-4e38-9016-1adac90b4a77): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:44:30 functional-933412 kubelet[6715]: E1120 20:44:30.774301    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-w4799" podUID="753b6014-a9c2-4e38-9016-1adac90b4a77"
	Nov 20 20:44:33 functional-933412 kubelet[6715]: E1120 20:44:33.180311    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671473179194610  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:33 functional-933412 kubelet[6715]: E1120 20:44:33.180358    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671473179194610  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:41 functional-933412 kubelet[6715]: E1120 20:44:41.974062    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-w4799" podUID="753b6014-a9c2-4e38-9016-1adac90b4a77"
	Nov 20 20:44:43 functional-933412 kubelet[6715]: E1120 20:44:43.182312    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671483181823760  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:43 functional-933412 kubelet[6715]: E1120 20:44:43.182332    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671483181823760  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:53 functional-933412 kubelet[6715]: E1120 20:44:53.085108    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod952cefb1-c3e7-481c-bb72-d7f96fde7bd9/crio-a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1: Error finding container a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1: Status 404 returned error can't find the container with id a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1
	Nov 20 20:44:53 functional-933412 kubelet[6715]: E1120 20:44:53.085705    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod886f9cf737a5ee47db4d863c4c536829/crio-37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95: Error finding container 37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95: Status 404 returned error can't find the container with id 37c02a51d80153ee41446d132d79e7cbe9311161bfbb5648b7da9b76c0622e95
	Nov 20 20:44:53 functional-933412 kubelet[6715]: E1120 20:44:53.184607    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671493183919249  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:44:53 functional-933412 kubelet[6715]: E1120 20:44:53.184706    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671493183919249  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:00 functional-933412 kubelet[6715]: E1120 20:45:00.883051    6715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 20 20:45:00 functional-933412 kubelet[6715]: E1120 20:45:00.883113    6715 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 20 20:45:00 functional-933412 kubelet[6715]: E1120 20:45:00.883413    6715 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-97v7k_kubernetes-dashboard(4811a1cd-b896-49a4-8130-01de71cc2b82): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:45:00 functional-933412 kubelet[6715]: E1120 20:45:00.883454    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:45:03 functional-933412 kubelet[6715]: E1120 20:45:03.188404    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671503186562933  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:03 functional-933412 kubelet[6715]: E1120 20:45:03.188425    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671503186562933  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:13 functional-933412 kubelet[6715]: E1120 20:45:13.190275    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671513189556218  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:13 functional-933412 kubelet[6715]: E1120 20:45:13.190321    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671513189556218  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:14 functional-933412 kubelet[6715]: E1120 20:45:14.978219    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:45:23 functional-933412 kubelet[6715]: E1120 20:45:23.192604    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763671523192273175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Nov 20 20:45:23 functional-933412 kubelet[6715]: E1120 20:45:23.192626    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763671523192273175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
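Every ErrImagePull in the kubelet log above shares one root cause: Docker Hub's anonymous pull rate limit. A sketch of the documented way to inspect the remaining quota without spending a pull (token endpoint, ratelimitpreview repository, and ratelimit-* headers per Docker's published docs; HEAD requests are not counted against the limit):

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// Fetch an anonymous token scoped to the rate-limit preview repo.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// HEAD the preview manifest and read the quota headers.
		req, err := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		res.Body.Close()

		// e.g. "100;w=21600" = 100 pulls per 21600s (6h) for anonymous clients.
		fmt.Println("limit:    ", res.Header.Get("ratelimit-limit"))
		fmt.Println("remaining:", res.Header.Get("ratelimit-remaining"))
	}

When remaining hits 0, pulls fail with exactly the toomanyrequests error seen throughout this run.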
	
	
	==> storage-provisioner [01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b] <==
	I1120 20:38:34.406720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 20:38:44.409322       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: TLS handshake timeout
	
	
	==> storage-provisioner [673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a] <==
	W1120 20:45:04.765695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:06.769327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:06.774776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:08.777632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:08.782745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:10.786742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:10.792360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:12.796493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:12.805751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:14.809890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:14.814726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:16.818261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:16.827630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:18.831370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:18.837803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:20.840782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:20.846454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:22.850346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:22.860884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:24.864971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:24.869505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:26.873183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:26.882183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:28.887786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:45:28.902862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
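The repeated warnings above are the apiserver flagging the provisioner's reads of the legacy v1 Endpoints API. A minimal client-go sketch of the replacement read path via discovery.k8s.io/v1 EndpointSlices (assumes a reachable kubeconfig in the default location; illustrative only, not the storage-provisioner's actual code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Slices belonging to a Service carry the kubernetes.io/service-name label.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"},
		)
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			for _, ep := range s.Endpoints {
				fmt.Println(s.Name, ep.Addresses)
			}
		}
	}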
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-933412 -n functional-933412
helpers_test.go:269: (dbg) Run:  kubectl --context functional-933412 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799
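The non-running-pod query above is a server-side field selector; status.phase supports = and != matching. The client-go equivalent, as a sketch (not the actual test helper):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// NamespaceAll ("") mirrors kubectl's -A flag.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace+"/"+p.Name, p.Status.Phase)
		}
	}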
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799: exit status 1 (94.681144ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 20 Nov 2025 20:40:25 +0000
	      Finished:     Thu, 20 Nov 2025 20:40:26 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62gmv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-62gmv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m7s  default-scheduler  Successfully assigned default/busybox-mount to functional-933412
	  Normal  Pulling    6m7s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m5s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.495s (1m2.514s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m5s  kubelet            Created container: mount-munger
	  Normal  Started    5m4s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-2dthj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qql4l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qql4l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason       Age                   From               Message
	  ----     ------       ----                  ----               -------
	  Normal   Scheduled    6m9s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dthj to functional-933412
	  Warning  FailedMount  6m8s                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-qql4l" : failed to sync configmap cache: timed out waiting for the condition
	  Warning  Failed       2m32s (x2 over 5m7s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       2m32s (x2 over 5m7s)  kubelet            Error: ErrImagePull
	  Normal   BackOff      2m17s (x2 over 5m7s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed       2m17s (x2 over 5m7s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling      2m4s (x3 over 6m7s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ppbrm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm5wr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tm5wr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason       Age                  From               Message
	  ----     ------       ----                 ----               -------
	  Normal   Scheduled    6m9s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppbrm to functional-933412
	  Warning  FailedMount  6m8s                 kubelet            MountVolume.SetUp failed for volume "kube-api-access-tm5wr" : failed to sync configmap cache: timed out waiting for the condition
	  Warning  Failed       5m37s                kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       90s (x3 over 5m37s)  kubelet            Error: ErrImagePull
	  Warning  Failed       90s (x2 over 4m2s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff      62s (x4 over 5m37s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed       62s (x4 over 5m37s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling      47s (x4 over 6m7s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-czlbc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-czlbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-933412
	  Warning  Failed     2m (x2 over 4m32s)    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m (x2 over 4m32s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    110s (x2 over 4m32s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     110s (x2 over 4m32s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    98s (x3 over 6m2s)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-97v7k" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-w4799" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.06s)

                                                
                                    
TestFunctional/parallel/MySQL (602.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-933412 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-77r7s" [4e3a647a-ad51-4455-93d6-b5ad363385d8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-933412 -n functional-933412
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-11-20 20:55:37.081023324 +0000 UTC m=+2085.967312076
functional_test.go:1804: (dbg) Run:  kubectl --context functional-933412 describe po mysql-5bb876957f-77r7s -n default
functional_test.go:1804: (dbg) kubectl --context functional-933412 describe po mysql-5bb876957f-77r7s -n default:
Name:             mysql-5bb876957f-77r7s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-933412/192.168.39.212
Start Time:       Thu, 20 Nov 2025 20:45:36 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:           10.244.0.13
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lbr6v (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lbr6v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-77r7s to functional-933412
  Warning  Failed     93s (x3 over 6m36s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     93s (x3 over 6m36s)  kubelet            Error: ErrImagePull
  Normal   BackOff    55s (x5 over 6m36s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     55s (x5 over 6m36s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    40s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-933412 logs mysql-5bb876957f-77r7s -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-933412 logs mysql-5bb876957f-77r7s -n default: exit status 1 (74.113129ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-77r7s" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-933412 logs mysql-5bb876957f-77r7s -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
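For context, the 10m0s wait that just expired boils down to a poll loop like the following client-go sketch (interval and structure are assumptions, not minikube's actual helper); when no app=mysql pod ever reaches Running, it returns the same context deadline exceeded logged above:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 5s; give up after 10m with context.DeadlineExceeded.
		err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 10*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("default").List(ctx,
					metav1.ListOptions{LabelSelector: "app=mysql"})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // retry transient errors and empty lists
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // e.g. stuck Pending on ImagePullBackOff
					}
				}
				return true, nil
			})
		fmt.Println("wait result:", err)
	}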
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-933412 -n functional-933412
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 logs -n 25: (1.390049537s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp             │ functional-933412 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh -n functional-933412 sudo cat /home/docker/cp-test.txt                                               │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls                                                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ cp             │ functional-933412 cp functional-933412:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2552378709/001/cp-test.txt │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image save --daemon kicbase/echo-server:functional-933412 --alsologtostderr                              │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh -n functional-933412 sudo cat /home/docker/cp-test.txt                                               │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ cp             │ functional-933412 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh -n functional-933412 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh echo hello                                                                                           │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh cat /etc/hostname                                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ update-context │ functional-933412 update-context --alsologtostderr -v=2                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ update-context │ functional-933412 update-context --alsologtostderr -v=2                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ update-context │ functional-933412 update-context --alsologtostderr -v=2                                                                    │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls --format short --alsologtostderr                                                                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls --format yaml --alsologtostderr                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ ssh            │ functional-933412 ssh pgrep buildkitd                                                                                      │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │                     │
	│ image          │ functional-933412 image build -t localhost/my-image:functional-933412 testdata/build --alsologtostderr                     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls                                                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls --format json --alsologtostderr                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ image          │ functional-933412 image ls --format table --alsologtostderr                                                                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:45 UTC │ 20 Nov 25 20:45 UTC │
	│ service        │ functional-933412 service list                                                                                             │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:49 UTC │ 20 Nov 25 20:49 UTC │
	│ service        │ functional-933412 service list -o json                                                                                     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:49 UTC │ 20 Nov 25 20:49 UTC │
	│ service        │ functional-933412 service --namespace=default --https --url hello-node                                                     │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:49 UTC │                     │
	│ service        │ functional-933412 service hello-node --url --format={{.IP}}                                                                │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:49 UTC │                     │
	│ service        │ functional-933412 service hello-node --url                                                                                 │ functional-933412 │ jenkins │ v1.37.0 │ 20 Nov 25 20:49 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:40:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:40:31.985595   18511 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:40:31.985883   18511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.985893   18511 out.go:374] Setting ErrFile to fd 2...
	I1120 20:40:31.985900   18511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.986117   18511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:40:31.986549   18511 out.go:368] Setting JSON to false
	I1120 20:40:31.987404   18511 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1382,"bootTime":1763669850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:40:31.987455   18511 start.go:143] virtualization: kvm guest
	I1120 20:40:31.989149   18511 out.go:179] * [functional-933412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:40:31.990360   18511 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:40:31.990348   18511 notify.go:221] Checking for updates...
	I1120 20:40:31.992108   18511 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:40:31.993326   18511 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:40:31.994559   18511 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:40:31.999027   18511 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:40:32.000067   18511 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:40:32.001461   18511 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:40:32.001869   18511 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:40:32.032149   18511 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 20:40:32.033200   18511 start.go:309] selected driver: kvm2
	I1120 20:40:32.033212   18511 start.go:930] validating driver "kvm2" against &{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:32.033322   18511 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:40:32.034235   18511 cni.go:84] Creating CNI manager for ""
	I1120 20:40:32.034287   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:40:32.034333   18511 start.go:353] cluster config:
	{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:32.035594   18511 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.843760138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08e60e6e-af74-4998-b6d7-26baa8eac54c name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.844260329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08e60e6e-af74-4998-b6d7-26baa8eac54c name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.884834782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac996bc5-7e25-4c3d-bbcb-6f4e8fb6d16c name=/runtime.v1.RuntimeService/Version
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.884926837Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac996bc5-7e25-4c3d-bbcb-6f4e8fb6d16c name=/runtime.v1.RuntimeService/Version
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.886410373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a984fc6d-5e36-471b-be98-c498e7e52fd2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.887136879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763672137887112177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a984fc6d-5e36-471b-be98-c498e7e52fd2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.888451670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=701a9efd-f43d-4dfc-8660-987c4a335478 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.888581576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=701a9efd-f43d-4dfc-8660-987c4a335478 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.889091160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=701a9efd-f43d-4dfc-8660-987c4a335478 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.919777592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4fd3a32-9eb3-4744-aa1d-c0dd42cd135e name=/runtime.v1.RuntimeService/Version
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.919875894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4fd3a32-9eb3-4744-aa1d-c0dd42cd135e name=/runtime.v1.RuntimeService/Version
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.921318941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68b5913a-92e8-47a6-9dc3-90579044aa97 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.922042979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763672137922018767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68b5913a-92e8-47a6-9dc3-90579044aa97 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.922852103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96a355e8-128d-4904-ad33-c06b6498ee10 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.922929521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96a355e8-128d-4904-ad33-c06b6498ee10 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.923305562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96a355e8-128d-4904-ad33-c06b6498ee10 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.947423636Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=452f6084-ed1a-4552-a17e-967904102553 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.947528534Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=452f6084-ed1a-4552-a17e-967904102553 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.957319729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dff53801-7966-4712-817b-823a206228c5 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.957404530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dff53801-7966-4712-817b-823a206228c5 name=/runtime.v1.RuntimeService/Version
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.959624829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=772fcf61-b30b-4b07-a4e4-57119ac08d1c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.960378736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763672137960352754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=772fcf61-b30b-4b07-a4e4-57119ac08d1c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.961614099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb295dfc-0756-42df-a28b-f3414cef7114 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.962060602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb295dfc-0756-42df-a28b-f3414cef7114 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 20:55:37 functional-933412 crio[5444]: time="2025-11-20 20:55:37.962616649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec,PodSandboxId:9d5410f166b27f7290149eaf010988ff34eb10c7da4fe7642931650553345704,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763671225911548564,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18caa9ca-3098-4bdc-baf5-124bf98f0577,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763671137286530799,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763671137267045266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a,PodSandboxId:e94448a6ba4f624ed0ada1da64d1cc4c92319fe5a75793f7fff548dec0121800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763671133907944852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72af6a97b1729f619e770ceba1822a32,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763671133675349726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eebdb6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763671133589370796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763671133592929227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kube
rnetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b,PodSandboxId:f1635aeef3e54b815ea4a3b8ac2a73ad7320d948a932f5712bb583b82bbf6821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_
EXITED,CreatedAt:1763671114332962613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903793d0-3eb5-4f21-a0c6-580ef4002705,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0,PodSandboxId:abafba18405848c7c1c917ffe664bee487cf99c35802b17c267c8305b88d1d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763671
111789329287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952,PodSandboxId:d61277493dd5b2d9aa8babc2058103150f8265fd13648eeb
db6459183e02d651,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763671108425980933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0e662c9a19ce79799b109de1b1f4882,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cd
c6e2759d64e3b6d,PodSandboxId:7026443238fe7850cb7366d5c169dd35f1a8ca3e382ddc5cd3c2323547f6acc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763671108322328407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3296ba79b34d04995df213bff57a006e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99,PodSandboxId:512c473573a1e94f039a87c5672d071f90eb941c07707910c442e9bf050b72f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763671108279605095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-933412,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f9cf737a5ee47db4d863c4c536829,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae,PodSandboxId:1f6a7ab0c91bae6afa6c78eff6aec322d91eed35f3bf58278b9b5d39aac4af53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763671108243463622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e68395-250c-46fc-8028-1e9e2456ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32,PodSandboxId:a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763671072119799527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2b7p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952cefb1-c3e7-481c-bb72-d7f96fde7bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb295dfc-0756-42df-a28b-f3414cef7114 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	109d4bb80eac7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   15 minutes ago      Exited              mount-munger              0                   9d5410f166b27       busybox-mount                               default
	2742e68c74423       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      16 minutes ago      Running             kube-proxy                3                   1f6a7ab0c91ba       kube-proxy-6xnj6                            kube-system
	673e5b087a6e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Running             storage-provisioner       3                   f1635aeef3e54       storage-provisioner                         kube-system
	564bc1707bc93       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      16 minutes ago      Running             kube-apiserver            0                   e94448a6ba4f6       kube-apiserver-functional-933412            kube-system
	90434f3698428       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      16 minutes ago      Running             kube-controller-manager   3                   7026443238fe7       kube-controller-manager-functional-933412   kube-system
	22f1327d1dafb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      16 minutes ago      Running             kube-scheduler            3                   512c473573a1e       kube-scheduler-functional-933412            kube-system
	52d08d12cb18e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      16 minutes ago      Running             etcd                      3                   d61277493dd5b       etcd-functional-933412                      kube-system
	01fbf4a1da609       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       2                   f1635aeef3e54       storage-provisioner                         kube-system
	71d72227e095d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      17 minutes ago      Running             coredns                   2                   abafba1840584       coredns-66bc5c9577-2b7p9                    kube-system
	e2153be7a1118       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      17 minutes ago      Exited              etcd                      2                   d61277493dd5b       etcd-functional-933412                      kube-system
	ac98f1d3b4d98       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      17 minutes ago      Exited              kube-controller-manager   2                   7026443238fe7       kube-controller-manager-functional-933412   kube-system
	06fbf273e2f27       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      17 minutes ago      Exited              kube-scheduler            2                   512c473573a1e       kube-scheduler-functional-933412            kube-system
	2a0c4fbb9b5d7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      17 minutes ago      Exited              kube-proxy                2                   1f6a7ab0c91ba       kube-proxy-6xnj6                            kube-system
	2f33930aaa277       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      17 minutes ago      Exited              coredns                   1                   a27089054680f       coredns-66bc5c9577-2b7p9                    kube-system
	
	
	==> coredns [2f33930aaa277443782a10506e2dcf54e186cd61cb81e27bb4a7c49e89d7ae32] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40383 - 36748 "HINFO IN 5820690942743418349.4099311619110396990. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022726285s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [71d72227e095d626a2ed00107da0403d038c7a037bb429e60a96275298e88fc0] <==
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45883 - 50568 "HINFO IN 5877995768124618261.4169147699699731941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.421086977s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41888->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41918->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41904->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
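The reflector errors in the coredns log above are all the same failure mode: the kubernetes plugin's cache reflector could not list Services, EndpointSlices, and Namespaces against the cluster VIP 10.96.0.1:443 while kube-apiserver was restarting (first TLS handshake timeouts, then connection refused), and the errors stop once the apiserver comes back. A minimal sketch of the equivalent List call using client-go, for reproducing that probe by hand; the kubeconfig path is illustrative, not taken from this report:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from a kubeconfig; the path here is illustrative.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// The same paginated List the coredns reflector issues; while the
		// apiserver was down this is what returned "TLS handshake timeout"
		// and "connection refused" in the log above.
		svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{Limit: 500})
		if err != nil {
			log.Fatalf("list services: %v", err)
		}
		fmt.Printf("listed %d services\n", len(svcs.Items))
	}
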
	
	==> describe nodes <==
	Name:               functional-933412
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-933412
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=functional-933412
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-933412
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:55:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:52:44 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:52:44 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:52:44 +0000   Thu, 20 Nov 2025 20:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:52:44 +0000   Thu, 20 Nov 2025 20:37:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    functional-933412
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 acb893993e724fd68d51aa75dbb6007a
	  System UUID:                acb89399-3e72-4fd6-8d51-aa75dbb6007a
	  Boot ID:                    8a667e28-1db3-4eb5-acb8-0cecc80439c5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2dthj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     hello-node-connect-7d85dfc575-ppbrm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     mysql-5bb876957f-77r7s                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-2b7p9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-functional-933412                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-functional-933412              250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-933412     200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-6xnj6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-functional-933412              100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-97v7k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-w4799         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeReady                18m                kubelet          Node functional-933412 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-933412 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-933412 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-933412 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node functional-933412 event: Registered Node functional-933412 in Controller
	
	
	==> dmesg <==
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081853] kauditd_printk_skb: 1 callbacks suppressed
	[Nov20 20:37] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.149691] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.660350] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.929408] kauditd_printk_skb: 249 callbacks suppressed
	[  +0.111579] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.014171] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.566090] kauditd_printk_skb: 176 callbacks suppressed
	[Nov20 20:38] kauditd_printk_skb: 131 callbacks suppressed
	[  +0.114496] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.272570] kauditd_printk_skb: 182 callbacks suppressed
	[  +2.672253] kauditd_printk_skb: 229 callbacks suppressed
	[  +6.943960] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.127034] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.126294] kauditd_printk_skb: 121 callbacks suppressed
	[Nov20 20:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.055294] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.001227] kauditd_printk_skb: 104 callbacks suppressed
	[ +25.129996] kauditd_printk_skb: 26 callbacks suppressed
	[Nov20 20:40] kauditd_printk_skb: 29 callbacks suppressed
	[Nov20 20:41] kauditd_printk_skb: 68 callbacks suppressed
	[Nov20 20:45] crun[10216]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[Nov20 20:49] kauditd_printk_skb: 42 callbacks suppressed
	
	
	==> etcd [52d08d12cb18e50874afbf11b9cf4b9e0bb4ac5eb9f26498458572f478c28be4] <==
	{"level":"warn","ts":"2025-11-20T20:38:55.578629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.584498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.598770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.615999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.627387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.635890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.644879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.658598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.671138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.678504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.686422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.693713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.703372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.716480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.726224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.743373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.760117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.770191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:38:55.861274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46834","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T20:48:55.064000Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1045}
	{"level":"info","ts":"2025-11-20T20:48:55.095821Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1045,"took":"31.410952ms","hash":2931667262,"current-db-size-bytes":3395584,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1482752,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-20T20:48:55.095865Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2931667262,"revision":1045,"compact-revision":-1}
	{"level":"info","ts":"2025-11-20T20:53:55.072904Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1378}
	{"level":"info","ts":"2025-11-20T20:53:55.076787Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1378,"took":"3.544955ms","hash":24426240,"current-db-size-bytes":3395584,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":2240512,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-11-20T20:53:55.076886Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":24426240,"revision":1378,"compact-revision":1045}
	
	
	==> etcd [e2153be7a11180d3bd6a067b4abe4ef8bb6c688c4b6fab5e53fd8643d8688952] <==
	{"level":"info","ts":"2025-11-20T20:38:29.355734Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","recovered-remote-peer-id":"eed9c28654b6490f","recovered-remote-peer-urls":["https://192.168.39.212:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:38:29.356742Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.356755Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-20T20:38:29.356796Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-20T20:38:29.356888Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"eed9c28654b6490f switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-20T20:38:29.356945Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"eed9c28654b6490f became follower at term 3"}
	{"level":"info","ts":"2025-11-20T20:38:29.356975Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft eed9c28654b6490f [peers: [], term: 3, commit: 552, applied: 0, lastindex: 552, lastterm: 3]"}
	{"level":"warn","ts":"2025-11-20T20:38:29.364020Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-20T20:38:29.391830Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":511}
	{"level":"info","ts":"2025-11-20T20:38:29.401745Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-20T20:38:29.403842Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"eed9c28654b6490f","timeout":"7s"}
	{"level":"info","ts":"2025-11-20T20:38:29.405934Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"eed9c28654b6490f"}
	{"level":"info","ts":"2025-11-20T20:38:29.406004Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"eed9c28654b6490f","local-server-version":"3.6.4","cluster-id":"f8d3b95e5bbb719c","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.406949Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"eed9c28654b6490f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407041Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407109Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407119Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T20:38:29.407311Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"eed9c28654b6490f switched to configuration voters=(17211001333175699727)"}
	{"level":"info","ts":"2025-11-20T20:38:29.407396Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","added-peer-id":"eed9c28654b6490f","added-peer-peer-urls":["https://192.168.39.212:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:38:29.407490Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-20T20:38:29.414334Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T20:38:29.418300Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"eed9c28654b6490f","initial-advertise-peer-urls":["https://192.168.39.212:2380"],"listen-peer-urls":["https://192.168.39.212:2380"],"advertise-client-urls":["https://192.168.39.212:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.212:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T20:38:29.418370Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T20:38:29.418498Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2025-11-20T20:38:29.418530Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.212:2380"}
	
	
	==> kernel <==
	 20:55:38 up 18 min,  0 users,  load average: 0.48, 0.36, 0.29
	Linux functional-933412 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [564bc1707bc9307784bea7458e4621916889a35a232fe51a4d9275430d61798a] <==
	I1120 20:38:56.715837       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 20:38:56.717113       1 aggregator.go:171] initial CRD sync complete...
	I1120 20:38:56.717416       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 20:38:56.717505       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 20:38:56.717522       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:38:56.725574       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:38:56.725821       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1120 20:38:56.748285       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 20:38:57.026303       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:38:57.489472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:38:58.325699       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:38:58.364821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:38:58.390567       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:38:58.397107       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:39:00.061709       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:39:00.112371       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:39:00.362096       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:39:16.327502       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.145.237"}
	I1120 20:39:20.910863       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.37.109"}
	I1120 20:39:20.942215       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.60.10"}
	I1120 20:40:32.961131       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:40:33.299327       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.87.218"}
	I1120 20:40:33.323787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.190.79"}
	I1120 20:45:36.774747       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.175.232"}
	I1120 20:48:56.632121       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [90434f36984286146dd3d86c58412231565cbf27ffc1150651b653de2eeaaf17] <==
	I1120 20:38:59.997635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:39:00.006780       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 20:39:00.006997       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 20:39:00.007087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:39:00.007130       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:39:00.007148       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:39:00.007149       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:39:00.008479       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:39:00.009407       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 20:39:00.012086       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:39:00.012538       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:39:00.019115       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 20:39:00.032896       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 20:39:00.039254       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 20:39:00.041533       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:39:00.042922       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:39:00.048280       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 20:39:00.055113       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1120 20:40:33.053616       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.093742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.097347       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.109865       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.113801       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.127931       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:40:33.129241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ac98f1d3b4d98774cdebbc569a8265eacdfce1afaeae519cdc6e2759d64e3b6d] <==
	
	
	==> kube-proxy [2742e68c74423ea8da2c67c16e59819bb24b4efa1b29c78d7df938eae784c262] <==
	I1120 20:38:57.537634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:38:57.639095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:38:57.639246       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.212"]
	E1120 20:38:57.639542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:38:57.744439       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1120 20:38:57.744545       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 20:38:57.744583       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:38:57.758377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:38:57.760174       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:38:57.760235       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:38:57.767043       1 config.go:200] "Starting service config controller"
	I1120 20:38:57.767096       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:38:57.767126       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:38:57.767140       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:38:57.767160       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:38:57.767173       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:38:57.767955       1 config.go:309] "Starting node config controller"
	I1120 20:38:57.767998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:38:57.768014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:38:57.867737       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 20:38:57.867768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:38:57.867742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [2a0c4fbb9b5d7f38b11ba69bd620f3eef8112fd96f6aceac5f0df2dde2b755ae] <==
	I1120 20:38:28.682607       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:38:28.797288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1120 20:38:28.818903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-933412&limit=500&resourceVersion=0\": dial tcp 192.168.39.212:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:38:40.372759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-933412&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [06fbf273e2f2745ad4e4fb02bc0603b4457bafc3fb8fc9fb37afc1bc8622eb99] <==
	I1120 20:38:31.345686       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [22f1327d1dafbd4fd0b7fc06ed6783f13b530b7a0038e94c6957ec266ab7103b] <==
	I1120 20:38:55.103741       1 serving.go:386] Generated self-signed cert in-memory
	I1120 20:38:56.746614       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 20:38:56.746746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:38:56.758093       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 20:38:56.758871       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 20:38:56.758924       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 20:38:56.758964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 20:38:56.760186       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.760330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.760231       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 20:38:56.766036       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 20:38:56.859179       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 20:38:56.861243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:38:56.871189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:54:53 functional-933412 kubelet[6715]: E1120 20:54:53.094172    6715 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod952cefb1-c3e7-481c-bb72-d7f96fde7bd9/crio-a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1: Error finding container a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1: Status 404 returned error can't find the container with id a27089054680f0a5d332250f397d18c1fceb8c18e5bee922e92da93555cdebb1
	Nov 20 20:54:53 functional-933412 kubelet[6715]: E1120 20:54:53.351341    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763672093351000308  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:54:53 functional-933412 kubelet[6715]: E1120 20:54:53.351363    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763672093351000308  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:54:55 functional-933412 kubelet[6715]: E1120 20:54:55.972293    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dthj" podUID="be97d2c4-1a44-4335-89c3-8e28cceea1a0"
	Nov 20 20:55:02 functional-933412 kubelet[6715]: E1120 20:55:02.978106    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-w4799" podUID="753b6014-a9c2-4e38-9016-1adac90b4a77"
	Nov 20 20:55:03 functional-933412 kubelet[6715]: E1120 20:55:03.353859    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763672103353435784  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:55:03 functional-933412 kubelet[6715]: E1120 20:55:03.353881    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763672103353435784  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:55:03 functional-933412 kubelet[6715]: E1120 20:55:03.972410    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3c972cf3-8435-4a39-8c33-cc134f096e49"
	Nov 20 20:55:04 functional-933412 kubelet[6715]: E1120 20:55:04.911113    6715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 20 20:55:04 functional-933412 kubelet[6715]: E1120 20:55:04.911162    6715 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 20 20:55:04 functional-933412 kubelet[6715]: E1120 20:55:04.911308    6715 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-97v7k_kubernetes-dashboard(4811a1cd-b896-49a4-8130-01de71cc2b82): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 20 20:55:04 functional-933412 kubelet[6715]: E1120 20:55:04.911339    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:55:09 functional-933412 kubelet[6715]: E1120 20:55:09.972254    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dthj" podUID="be97d2c4-1a44-4335-89c3-8e28cceea1a0"
	Nov 20 20:55:13 functional-933412 kubelet[6715]: E1120 20:55:13.355927    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763672113355629355  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:55:13 functional-933412 kubelet[6715]: E1120 20:55:13.355968    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763672113355629355  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:55:13 functional-933412 kubelet[6715]: E1120 20:55:13.973414    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-w4799" podUID="753b6014-a9c2-4e38-9016-1adac90b4a77"
	Nov 20 20:55:14 functional-933412 kubelet[6715]: E1120 20:55:14.972425    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3c972cf3-8435-4a39-8c33-cc134f096e49"
	Nov 20 20:55:16 functional-933412 kubelet[6715]: E1120 20:55:16.980035    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:55:23 functional-933412 kubelet[6715]: E1120 20:55:23.357880    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763672123357418525  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:55:23 functional-933412 kubelet[6715]: E1120 20:55:23.357922    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763672123357418525  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:55:25 functional-933412 kubelet[6715]: E1120 20:55:25.972445    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3c972cf3-8435-4a39-8c33-cc134f096e49"
	Nov 20 20:55:28 functional-933412 kubelet[6715]: E1120 20:55:28.975764    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-w4799" podUID="753b6014-a9c2-4e38-9016-1adac90b4a77"
	Nov 20 20:55:31 functional-933412 kubelet[6715]: E1120 20:55:31.974495    6715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-97v7k" podUID="4811a1cd-b896-49a4-8130-01de71cc2b82"
	Nov 20 20:55:33 functional-933412 kubelet[6715]: E1120 20:55:33.359352    6715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763672133359073534  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Nov 20 20:55:33 functional-933412 kubelet[6715]: E1120 20:55:33.359371    6715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763672133359073534  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	
	
	==> storage-provisioner [01fbf4a1da6099e3d031d9d6c9675cfb69ec9e9ef89363a6ce3751a24003050b] <==
	I1120 20:38:34.406720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 20:38:44.409322       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: TLS handshake timeout
	
	
	==> storage-provisioner [673e5b087a6e15ebc05016c21b9f89afe87feeee77151edc71cf57953e5f4a3a] <==
	W1120 20:55:14.057140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:16.060987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:16.065757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:18.068717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:18.076435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:20.080270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:20.085496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:22.088545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:22.094869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:24.099073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:24.104585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:26.109034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:26.118840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:28.122927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:28.127567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:30.131302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:30.138532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:32.142439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:32.148092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:34.152079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:34.160631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:36.164322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:36.169902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:38.175088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:55:38.186064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-933412 -n functional-933412
helpers_test.go:269: (dbg) Run:  kubectl --context functional-933412 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm mysql-5bb876957f-77r7s sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm mysql-5bb876957f-77r7s sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm mysql-5bb876957f-77r7s sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799: exit status 1 (104.258627ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://109d4bb80eac7f93914ca8338e2f4f6a414795a90b965b62cedac4a99500abec
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 20 Nov 2025 20:40:25 +0000
	      Finished:     Thu, 20 Nov 2025 20:40:26 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62gmv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-62gmv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  16m   default-scheduler  Successfully assigned default/busybox-mount to functional-933412
	  Normal  Pulling    16m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.495s (1m2.514s including waiting). Image size: 4631262 bytes.
	  Normal  Created    15m   kubelet            Created container: mount-munger
	  Normal  Started    15m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-2dthj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qql4l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qql4l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason       Age                  From               Message
	  ----     ------       ----                 ----               -------
	  Normal   Scheduled    16m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dthj to functional-933412
	  Warning  FailedMount  16m                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-qql4l" : failed to sync configmap cache: timed out waiting for the condition
	  Warning  Failed       9m38s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling      4m37s (x5 over 16m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed       3m5s (x4 over 15m)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       3m5s (x5 over 15m)   kubelet            Error: ErrImagePull
	  Warning  Failed       73s (x20 over 15m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff      30s (x23 over 15m)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ppbrm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm5wr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tm5wr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason       Age                   From               Message
	  ----     ------       ----                  ----               -------
	  Normal   Scheduled    16m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppbrm to functional-933412
	  Warning  FailedMount  16m                   kubelet            MountVolume.SetUp failed for volume "kube-api-access-tm5wr" : failed to sync configmap cache: timed out waiting for the condition
	  Warning  Failed       15m                   kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       8m38s (x3 over 14m)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling      7m12s (x5 over 16m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed       4m35s (x5 over 15m)   kubelet            Error: ErrImagePull
	  Warning  Failed       4m35s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed       3m21s (x16 over 15m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff      2m5s (x22 over 15m)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-77r7s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:45:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lbr6v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lbr6v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-77r7s to functional-933412
	  Warning  Failed     95s (x3 over 6m38s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     95s (x3 over 6m38s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    57s (x5 over 6m38s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     57s (x5 over 6m38s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    42s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933412/192.168.39.212
	Start Time:       Thu, 20 Nov 2025 20:39:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-czlbc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-czlbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  16m                   default-scheduler  Successfully assigned default/sp-pod to functional-933412
	  Warning  Failed     5m38s (x2 over 9m8s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m12s (x5 over 16m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m5s (x3 over 14m)    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m5s (x5 over 14m)    kubelet            Error: ErrImagePull
	  Warning  Failed     50s (x17 over 14m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    14s (x20 over 14m)    kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-97v7k" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-w4799" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-933412 describe pod busybox-mount hello-node-75c85bcc94-2dthj hello-node-connect-7d85dfc575-ppbrm mysql-5bb876957f-77r7s sp-pod dashboard-metrics-scraper-77bf4d6c4c-97v7k kubernetes-dashboard-855c9754f9-w4799: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.62s)
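Note: nearly every failure in this run traces back to Docker Hub's unauthenticated pull rate limit; the kubelet logs and pod events above all end in the same `toomanyrequests` error. As a first triage step, one can query the remaining anonymous pull budget straight from the registry. A minimal sketch using Docker's documented rate-limit check (the `ratelimitpreview/test` repository is the dummy image Docker provides for this purpose; `curl` and `jq` on the host are assumed):

	# Request an anonymous pull token, then read the rate-limit headers from a HEAD request.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

While the node is throttled, the `ratelimit-remaining` header reports 0 and every pull fails exactly as in the logs above.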

TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-933412 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-933412 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-2dthj" [be97d2c4-1a44-4335-89c3-8e28cceea1a0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-933412 -n functional-933412
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-20 20:49:21.205798671 +0000 UTC m=+1710.092087423
functional_test.go:1460: (dbg) Run:  kubectl --context functional-933412 describe po hello-node-75c85bcc94-2dthj -n default
functional_test.go:1460: (dbg) kubectl --context functional-933412 describe po hello-node-75c85bcc94-2dthj -n default:
Name:             hello-node-75c85bcc94-2dthj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-933412/192.168.39.212
Start Time:       Thu, 20 Nov 2025 20:39:20 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qql4l (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-qql4l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dthj to functional-933412
  Warning  FailedMount  9m59s                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-qql4l" : failed to sync configmap cache: timed out waiting for the condition
  Warning  Failed       6m23s (x2 over 8m58s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed       3m20s (x3 over 8m58s)  kubelet            Error: ErrImagePull
  Warning  Failed       3m20s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff      2m46s (x5 over 8m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed       2m46s (x5 over 8m58s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling      2m31s (x4 over 9m58s)  kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-933412 logs hello-node-75c85bcc94-2dthj -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-933412 logs hello-node-75c85bcc94-2dthj -n default: exit status 1 (68.616968ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-2dthj" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-933412 logs hello-node-75c85bcc94-2dthj -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)
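The DeployApp failure is the same rate limit surfacing through the service path: `kicbase/echo-server` never pulls, so the hello-node pod sits in ImagePullBackOff for the entire 10m0s wait. For local reproduction (this is not something the test harness does), one possible workaround is to side-load the image so the node never has to contact Docker Hub; a sketch, assuming an authenticated `docker` on the host and the `functional-933412` profile still running:

	# Pull on the host, then copy the image into the minikube node's container storage.
	docker pull kicbase/echo-server:latest
	minikube -p functional-933412 image load kicbase/echo-server:latest
	# Recreate the pods so kubelet can use the now-local image.
	kubectl --context functional-933412 rollout restart deployment/hello-node

One caveat: because the deployment references the image without a tag, the pull policy defaults to Always, so the pod spec would also need imagePullPolicy: IfNotPresent for the side-loaded copy to be used.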

TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 service --namespace=default --https --url hello-node: exit status 115 (238.747798ms)

-- stdout --
	https://192.168.39.212:31972
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-933412 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

TestFunctional/parallel/ServiceCmd/Format (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 service hello-node --url --format={{.IP}}: exit status 115 (237.933975ms)

-- stdout --
	192.168.39.212
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-933412 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.24s)

TestFunctional/parallel/ServiceCmd/URL (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 service hello-node --url: exit status 115 (232.946713ms)

-- stdout --
	http://192.168.39.212:31972
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-933412 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.212:31972
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.23s)
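The three ServiceCmd failures above (HTTPS, Format, URL) are all downstream of the same unpulled image: minikube resolves the NodePort URL correctly, then exits with SVC_UNREACHABLE because no running pod backs the service. A quick way to confirm that only the endpoints are missing, sketched against the same profile:

	kubectl --context functional-933412 get svc hello-node -o wide
	kubectl --context functional-933412 get endpoints hello-node
	kubectl --context functional-933412 get pods -l app=hello-node

The service and its NodePort exist; the endpoints list stays empty while the pod is in ImagePullBackOff, which is exactly the condition that makes `minikube service` exit with status 115.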

TestPreload (132.41s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-787681 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1120 21:32:23.399097    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-787681 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m7.770182706s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-787681 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-787681 image pull gcr.io/k8s-minikube/busybox: (2.509511263s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-787681
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-787681: (8.623138408s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-787681 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1120 21:34:20.328339    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-787681 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (50.863334003s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-787681 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-11-20 21:34:22.627607706 +0000 UTC m=+4411.513896457
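Unlike the functional-test failures, TestPreload is not a rate-limit casualty: the busybox pull succeeded before the stop, yet the image is absent from the post-restart image list, which matches only the preload tarball's contents. That pattern suggests (though the logs here do not prove it) that the second start repopulated container storage from the preload rather than preserving the previously pulled image. A sketch of how one might inspect what actually survives on the node after the second start:

	# List images as minikube sees them, then as cri-o's storage reports them.
	minikube -p test-preload-787681 image ls --format table
	minikube -p test-preload-787681 ssh -- sudo crictl images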
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-787681 -n test-preload-787681
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-787681 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-787681 logs -n 25: (1.03077249s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-213052 ssh -n multinode-213052-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:20 UTC │ 20 Nov 25 21:20 UTC │
	│ ssh     │ multinode-213052 ssh -n multinode-213052 sudo cat /home/docker/cp-test_multinode-213052-m03_multinode-213052.txt                                          │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:20 UTC │ 20 Nov 25 21:20 UTC │
	│ cp      │ multinode-213052 cp multinode-213052-m03:/home/docker/cp-test.txt multinode-213052-m02:/home/docker/cp-test_multinode-213052-m03_multinode-213052-m02.txt │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:20 UTC │ 20 Nov 25 21:20 UTC │
	│ ssh     │ multinode-213052 ssh -n multinode-213052-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:20 UTC │ 20 Nov 25 21:20 UTC │
	│ ssh     │ multinode-213052 ssh -n multinode-213052-m02 sudo cat /home/docker/cp-test_multinode-213052-m03_multinode-213052-m02.txt                                  │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:20 UTC │ 20 Nov 25 21:20 UTC │
	│ node    │ multinode-213052 node stop m03                                                                                                                            │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:20 UTC │ 20 Nov 25 21:20 UTC │
	│ node    │ multinode-213052 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:20 UTC │ 20 Nov 25 21:21 UTC │
	│ node    │ list -p multinode-213052                                                                                                                                  │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p multinode-213052                                                                                                                                       │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:24 UTC │
	│ start   │ -p multinode-213052 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:24 UTC │ 20 Nov 25 21:26 UTC │
	│ node    │ list -p multinode-213052                                                                                                                                  │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:26 UTC │                     │
	│ node    │ multinode-213052 node delete m03                                                                                                                          │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:26 UTC │ 20 Nov 25 21:26 UTC │
	│ stop    │ multinode-213052 stop                                                                                                                                     │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:26 UTC │ 20 Nov 25 21:29 UTC │
	│ start   │ -p multinode-213052 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:29 UTC │ 20 Nov 25 21:31 UTC │
	│ node    │ list -p multinode-213052                                                                                                                                  │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ start   │ -p multinode-213052-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-213052-m02 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │                     │
	│ start   │ -p multinode-213052-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-213052-m03 │ jenkins │ v1.37.0 │ 20 Nov 25 21:31 UTC │ 20 Nov 25 21:32 UTC │
	│ node    │ add -p multinode-213052                                                                                                                                   │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:32 UTC │                     │
	│ delete  │ -p multinode-213052-m03                                                                                                                                   │ multinode-213052-m03 │ jenkins │ v1.37.0 │ 20 Nov 25 21:32 UTC │ 20 Nov 25 21:32 UTC │
	│ delete  │ -p multinode-213052                                                                                                                                       │ multinode-213052     │ jenkins │ v1.37.0 │ 20 Nov 25 21:32 UTC │ 20 Nov 25 21:32 UTC │
	│ start   │ -p test-preload-787681 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-787681  │ jenkins │ v1.37.0 │ 20 Nov 25 21:32 UTC │ 20 Nov 25 21:33 UTC │
	│ image   │ test-preload-787681 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-787681  │ jenkins │ v1.37.0 │ 20 Nov 25 21:33 UTC │ 20 Nov 25 21:33 UTC │
	│ stop    │ -p test-preload-787681                                                                                                                                    │ test-preload-787681  │ jenkins │ v1.37.0 │ 20 Nov 25 21:33 UTC │ 20 Nov 25 21:33 UTC │
	│ start   │ -p test-preload-787681 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-787681  │ jenkins │ v1.37.0 │ 20 Nov 25 21:33 UTC │ 20 Nov 25 21:34 UTC │
	│ image   │ test-preload-787681 image list                                                                                                                            │ test-preload-787681  │ jenkins │ v1.37.0 │ 20 Nov 25 21:34 UTC │ 20 Nov 25 21:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:33:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:33:31.630038   38602 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:33:31.630171   38602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:33:31.630179   38602 out.go:374] Setting ErrFile to fd 2...
	I1120 21:33:31.630187   38602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:33:31.630375   38602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:33:31.630817   38602 out.go:368] Setting JSON to false
	I1120 21:33:31.631671   38602 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4562,"bootTime":1763669850,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:33:31.631765   38602 start.go:143] virtualization: kvm guest
	I1120 21:33:31.634078   38602 out.go:179] * [test-preload-787681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:33:31.635397   38602 notify.go:221] Checking for updates...
	I1120 21:33:31.635431   38602 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:33:31.637096   38602 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:33:31.638458   38602 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:33:31.639875   38602 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 21:33:31.641286   38602 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:33:31.642534   38602 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:33:31.644145   38602 config.go:182] Loaded profile config "test-preload-787681": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1120 21:33:31.645872   38602 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1120 21:33:31.647137   38602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:33:31.680443   38602 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 21:33:31.681740   38602 start.go:309] selected driver: kvm2
	I1120 21:33:31.681754   38602 start.go:930] validating driver "kvm2" against &{Name:test-preload-787681 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-787681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:33:31.681872   38602 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:33:31.682805   38602 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:33:31.682884   38602 cni.go:84] Creating CNI manager for ""
	I1120 21:33:31.682941   38602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:33:31.683000   38602 start.go:353] cluster config:
	{Name:test-preload-787681 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-787681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:33:31.683098   38602 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:33:31.684546   38602 out.go:179] * Starting "test-preload-787681" primary control-plane node in "test-preload-787681" cluster
	I1120 21:33:31.685830   38602 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1120 21:33:31.706606   38602 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1120 21:33:31.706660   38602 cache.go:65] Caching tarball of preloaded images
	I1120 21:33:31.706829   38602 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1120 21:33:31.708768   38602 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1120 21:33:31.710084   38602 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1120 21:33:31.736430   38602 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1120 21:33:31.736489   38602 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
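
The two preload lines above show the pattern minikube uses here: fetch the expected MD5 from the GCS API, then download with a ?checksum=md5:... query so the tarball is verified as it lands. A minimal Go sketch of the same verify-while-downloading idea (function name and destination path are illustrative, not minikube's actual download.go):

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    // downloadWithMD5 streams url into dst while hashing, then compares
    // the digest against want (lowercase hex), mirroring the checksum
    // query appended to the preload URL in the log above.
    func downloadWithMD5(url, dst, want string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	h := md5.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    	}
    	return nil
    }

    func main() {
    	// URL and checksum taken from the log lines above; dst is a placeholder.
    	err := downloadWithMD5(
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
    		"/tmp/preloaded.tar.lz4",
    		"2acdb4dde52794f2167c79dcee7507ae",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
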
	I1120 21:33:34.245652   38602 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1120 21:33:34.245793   38602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/config.json ...
	I1120 21:33:34.246033   38602 start.go:360] acquireMachinesLock for test-preload-787681: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 21:33:34.246091   38602 start.go:364] duration metric: took 37.841µs to acquireMachinesLock for "test-preload-787681"
	I1120 21:33:34.246107   38602 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:33:34.246111   38602 fix.go:54] fixHost starting: 
	I1120 21:33:34.247898   38602 fix.go:112] recreateIfNeeded on test-preload-787681: state=Stopped err=<nil>
	W1120 21:33:34.247923   38602 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:33:34.249841   38602 out.go:252] * Restarting existing kvm2 VM for "test-preload-787681" ...
	I1120 21:33:34.249911   38602 main.go:143] libmachine: starting domain...
	I1120 21:33:34.249923   38602 main.go:143] libmachine: ensuring networks are active...
	I1120 21:33:34.250638   38602 main.go:143] libmachine: Ensuring network default is active
	I1120 21:33:34.250996   38602 main.go:143] libmachine: Ensuring network mk-test-preload-787681 is active
	I1120 21:33:34.251395   38602 main.go:143] libmachine: getting domain XML...
	I1120 21:33:34.252375   38602 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-787681</name>
	  <uuid>3f5035eb-a31b-4dbd-b091-f5869427f1ba</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/test-preload-787681/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21923-3793/.minikube/machines/test-preload-787681/test-preload-787681.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:cd:15:ff'/>
	      <source network='mk-test-preload-787681'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:f0:6c:bf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
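The XML above is a complete libvirt domain definition: 2 vCPUs, 3 GiB RAM, a virtio raw disk, two virtio NICs (the private mk-test-preload-787681 network plus default), and the boot2docker ISO attached as a CD-ROM boot device. A minimal sketch of defining and booting such a domain with the libvirt Go bindings (libvirt.org/go/libvirt); minikube's kvm2 driver does effectively this, though its real code path differs:

    package main

    import (
    	"log"
    	"os"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system") // URI from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	xml, err := os.ReadFile("domain.xml") // the <domain> document shown above
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Define (or redefine) the persistent domain, then boot it.
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()
    	if err := dom.Create(); err != nil { // equivalent to `virsh start`
    		log.Fatal(err)
    	}
    	log.Println("domain is now running")
    }
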
	I1120 21:33:35.518732   38602 main.go:143] libmachine: waiting for domain to start...
	I1120 21:33:35.520255   38602 main.go:143] libmachine: domain is now running
	I1120 21:33:35.520275   38602 main.go:143] libmachine: waiting for IP...
	I1120 21:33:35.521052   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:35.521559   38602 main.go:143] libmachine: domain test-preload-787681 has current primary IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:35.521572   38602 main.go:143] libmachine: found domain IP: 192.168.39.223
	I1120 21:33:35.521578   38602 main.go:143] libmachine: reserving static IP address...
	I1120 21:33:35.521988   38602 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-787681", mac: "52:54:00:cd:15:ff", ip: "192.168.39.223"} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:32:29 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:35.522029   38602 main.go:143] libmachine: skip adding static IP to network mk-test-preload-787681 - found existing host DHCP lease matching {name: "test-preload-787681", mac: "52:54:00:cd:15:ff", ip: "192.168.39.223"}
	I1120 21:33:35.522046   38602 main.go:143] libmachine: reserved static IP address 192.168.39.223 for domain test-preload-787681
	I1120 21:33:35.522053   38602 main.go:143] libmachine: waiting for SSH...
	I1120 21:33:35.522061   38602 main.go:143] libmachine: Getting to WaitForSSH function...
	I1120 21:33:35.524390   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:35.524728   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:32:29 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:35.524748   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:35.524919   38602 main.go:143] libmachine: Using SSH client type: native
	I1120 21:33:35.525170   38602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1120 21:33:35.525181   38602 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1120 21:33:38.609133   38602 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.223:22: connect: no route to host
	I1120 21:33:44.689158   38602 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.223:22: connect: no route to host
	I1120 21:33:47.690591   38602 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.223:22: connect: connection refused
	I1120 21:33:50.795792   38602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
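
The dial errors above ("no route to host", then "connection refused") are the normal progression while the guest boots and sshd comes up; minikube simply retries until the `exit 0` probe succeeds. A bare-bones sketch of that wait loop (timeouts and intervals are illustrative, not minikube's actual constants, and this only checks that something is listening rather than running a command over SSH):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH polls the guest's port 22 until a TCP connection
    // succeeds or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		fmt.Println("retrying:", err) // e.g. "connect: no route to host"
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("ssh not reachable within %s", timeout)
    }

    func main() {
    	if err := waitForSSH("192.168.39.223:22", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
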
	I1120 21:33:50.799123   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:50.799551   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:50.799583   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:50.799832   38602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/config.json ...
	I1120 21:33:50.800124   38602 machine.go:94] provisionDockerMachine start ...
	I1120 21:33:50.802312   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:50.802664   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:50.802692   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:50.802887   38602 main.go:143] libmachine: Using SSH client type: native
	I1120 21:33:50.803112   38602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1120 21:33:50.803124   38602 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:33:50.909391   38602 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1120 21:33:50.909432   38602 buildroot.go:166] provisioning hostname "test-preload-787681"
	I1120 21:33:50.912216   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:50.912661   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:50.912686   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:50.912891   38602 main.go:143] libmachine: Using SSH client type: native
	I1120 21:33:50.913099   38602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1120 21:33:50.913111   38602 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-787681 && echo "test-preload-787681" | sudo tee /etc/hostname
	I1120 21:33:51.033966   38602 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-787681
	
	I1120 21:33:51.036932   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.037392   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.037417   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.037600   38602 main.go:143] libmachine: Using SSH client type: native
	I1120 21:33:51.037792   38602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1120 21:33:51.037807   38602 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-787681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-787681/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-787681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:33:51.154074   38602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:33:51.154105   38602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 21:33:51.154128   38602 buildroot.go:174] setting up certificates
	I1120 21:33:51.154140   38602 provision.go:84] configureAuth start
	I1120 21:33:51.157467   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.157862   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.157894   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.160092   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.160412   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.160430   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.160560   38602 provision.go:143] copyHostCerts
	I1120 21:33:51.160612   38602 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem, removing ...
	I1120 21:33:51.160626   38602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem
	I1120 21:33:51.160694   38602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 21:33:51.160834   38602 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem, removing ...
	I1120 21:33:51.160845   38602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem
	I1120 21:33:51.160909   38602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 21:33:51.161002   38602 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem, removing ...
	I1120 21:33:51.161012   38602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem
	I1120 21:33:51.161039   38602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 21:33:51.161091   38602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.test-preload-787681 san=[127.0.0.1 192.168.39.223 localhost minikube test-preload-787681]
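
The provision.go:117 line above generates the machine's server certificate, signed by the shared minikube CA, with the org and SANs it lists. A condensed crypto/x509 sketch of that step (a sketch only: the PKCS#1 RSA key format for the CA key is an assumption, and paths are shortened from the log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Load the shared CA pair.
    	caPEM, err := os.ReadFile("certs/ca.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caBlock, _ := pem.Decode(caPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	if caBlock == nil || keyBlock == nil {
    		log.Fatal("bad PEM input")
    	}
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes RSA/PKCS#1
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Server key plus the org and SANs shown in the provision.go line above.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-787681"}},
    		DNSNames:     []string{"localhost", "minikube", "test-preload-787681"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.223")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
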
	I1120 21:33:51.314711   38602 provision.go:177] copyRemoteCerts
	I1120 21:33:51.314773   38602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:33:51.317388   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.317793   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.317820   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.317959   38602 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/test-preload-787681/id_rsa Username:docker}
	I1120 21:33:51.399903   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:33:51.432443   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1120 21:33:51.464221   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:33:51.495560   38602 provision.go:87] duration metric: took 341.404888ms to configureAuth
	I1120 21:33:51.495587   38602 buildroot.go:189] setting minikube options for container-runtime
	I1120 21:33:51.495796   38602 config.go:182] Loaded profile config "test-preload-787681": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1120 21:33:51.498793   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.499418   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.499453   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.499646   38602 main.go:143] libmachine: Using SSH client type: native
	I1120 21:33:51.499929   38602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1120 21:33:51.499952   38602 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:33:51.741155   38602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:33:51.741183   38602 machine.go:97] duration metric: took 941.042653ms to provisionDockerMachine
	I1120 21:33:51.741194   38602 start.go:293] postStartSetup for "test-preload-787681" (driver="kvm2")
	I1120 21:33:51.741204   38602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:33:51.741257   38602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:33:51.744161   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.744546   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.744579   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.744742   38602 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/test-preload-787681/id_rsa Username:docker}
	I1120 21:33:51.827957   38602 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:33:51.833018   38602 info.go:137] Remote host: Buildroot 2025.02
	I1120 21:33:51.833042   38602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 21:33:51.833122   38602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 21:33:51.833238   38602 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem -> 77062.pem in /etc/ssl/certs
	I1120 21:33:51.833361   38602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:33:51.846177   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:33:51.876123   38602 start.go:296] duration metric: took 134.913844ms for postStartSetup
	I1120 21:33:51.876175   38602 fix.go:56] duration metric: took 17.630062091s for fixHost
	I1120 21:33:51.878780   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.879163   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.879194   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.879352   38602 main.go:143] libmachine: Using SSH client type: native
	I1120 21:33:51.879566   38602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1120 21:33:51.879579   38602 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 21:33:51.985704   38602 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763674431.947225717
	
	I1120 21:33:51.985728   38602 fix.go:216] guest clock: 1763674431.947225717
	I1120 21:33:51.985737   38602 fix.go:229] Guest: 2025-11-20 21:33:51.947225717 +0000 UTC Remote: 2025-11-20 21:33:51.876181454 +0000 UTC m=+20.292703490 (delta=71.044263ms)
	I1120 21:33:51.985756   38602 fix.go:200] guest clock delta is within tolerance: 71.044263ms
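
The fix.go lines above run `date +%s.%N` on the guest, compare it against the host clock, and accept the ~71 ms delta. A sketch of that tolerance check (the one-second threshold here is an assumption for illustration, not minikube's actual constant):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, from the log above.
    	guestOut := "1763674431.947225717"
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	fmt.Printf("guest clock delta: %v\n", delta)
    	// Hypothetical tolerance; the log above accepts the observed 71ms.
    	if math.Abs(delta.Seconds()) > 1.0 {
    		fmt.Println("guest clock needs adjustment")
    	}
    }
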
	I1120 21:33:51.985763   38602 start.go:83] releasing machines lock for "test-preload-787681", held for 17.739661385s
	I1120 21:33:51.988998   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.989366   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.989392   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.989941   38602 ssh_runner.go:195] Run: cat /version.json
	I1120 21:33:51.990037   38602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:33:51.992874   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.993143   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.993327   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.993359   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.993547   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:51.993551   38602 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/test-preload-787681/id_rsa Username:docker}
	I1120 21:33:51.993579   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:51.993763   38602 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/test-preload-787681/id_rsa Username:docker}
	I1120 21:33:52.071237   38602 ssh_runner.go:195] Run: systemctl --version
	I1120 21:33:52.099486   38602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:33:52.247392   38602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:33:52.255137   38602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:33:52.255199   38602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:33:52.276514   38602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 21:33:52.276547   38602 start.go:496] detecting cgroup driver to use...
	I1120 21:33:52.276621   38602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:33:52.298112   38602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:33:52.317298   38602 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:33:52.317375   38602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:33:52.345763   38602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:33:52.363752   38602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:33:52.521901   38602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:33:52.754633   38602 docker.go:234] disabling docker service ...
	I1120 21:33:52.754706   38602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:33:52.772967   38602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:33:52.789587   38602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:33:52.968111   38602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:33:53.121397   38602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:33:53.138617   38602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:33:53.162337   38602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1120 21:33:53.162402   38602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:33:53.175823   38602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:33:53.175921   38602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:33:53.189328   38602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:33:53.202367   38602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:33:53.215793   38602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:33:53.229571   38602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:33:53.242597   38602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:33:53.264230   38602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
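
After the sed edits above finish, the cri-o drop-in they target would read roughly as follows. This is reconstructed from the commands, not copied from the VM; the section headers are the standard cri-o ones, and the actual file may group keys differently:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
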
	I1120 21:33:53.277503   38602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:33:53.289044   38602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1120 21:33:53.289110   38602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1120 21:33:53.311411   38602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
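
The `modprobe br_netfilter` plus the ip_forward write above are the two kernel prerequisites for bridged pod networking; the earlier sysctl failure happened only because the module was not loaded yet. A tiny sketch that verifies both via /proc (same paths as in the error message above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func check(path string) {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		// e.g. br_netfilter not loaded yet, as in the sysctl failure above
    		fmt.Printf("%s: %v\n", path, err)
    		return
    	}
    	fmt.Printf("%s = %s\n", path, strings.TrimSpace(string(b)))
    }

    func main() {
    	check("/proc/sys/net/bridge/bridge-nf-call-iptables")
    	check("/proc/sys/net/ipv4/ip_forward")
    }
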
	I1120 21:33:53.324417   38602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:33:53.474616   38602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:33:53.597152   38602 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:33:53.597249   38602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:33:53.602844   38602 start.go:564] Will wait 60s for crictl version
	I1120 21:33:53.602923   38602 ssh_runner.go:195] Run: which crictl
	I1120 21:33:53.607295   38602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 21:33:53.644543   38602 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 21:33:53.644627   38602 ssh_runner.go:195] Run: crio --version
	I1120 21:33:53.675712   38602 ssh_runner.go:195] Run: crio --version
	I1120 21:33:53.707380   38602 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1120 21:33:53.711200   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:53.711598   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:33:53.711621   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:33:53.711794   38602 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1120 21:33:53.716621   38602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
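
The bash one-liner above rewrites /etc/hosts via a temp file: keep every line that does not already map host.minikube.internal, append the fresh entry, then cp the result into place. The same replace-or-append logic in Go (done in-process and written back directly for brevity, rather than over SSH through a temp file):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line for name and appends
    // "ip\tname", mirroring the grep -v / echo / cp pipeline above.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
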
	I1120 21:33:53.732000   38602 kubeadm.go:884] updating cluster {Name:test-preload-787681 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-787681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:33:53.732121   38602 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1120 21:33:53.732189   38602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:33:53.771323   38602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1120 21:33:53.771406   38602 ssh_runner.go:195] Run: which lz4
	I1120 21:33:53.776017   38602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1120 21:33:53.780900   38602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1120 21:33:53.780938   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1120 21:33:55.388570   38602 crio.go:462] duration metric: took 1.612581883s to copy over tarball
	I1120 21:33:55.388643   38602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1120 21:33:57.170142   38602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.781470443s)
	I1120 21:33:57.170179   38602 crio.go:469] duration metric: took 1.781577762s to extract the tarball
	I1120 21:33:57.170189   38602 ssh_runner.go:146] rm: /preloaded.tar.lz4
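
The extraction above shells out to `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf` on the guest. An in-process equivalent using archive/tar plus github.com/pierrec/lz4/v4, as a sketch only: xattr handling and the remaining tar entry types are omitted, and this is not what ssh_runner actually does:

    package main

    import (
    	"archive/tar"
    	"io"
    	"log"
    	"os"
    	"path/filepath"

    	"github.com/pierrec/lz4/v4"
    )

    func main() {
    	f, err := os.Open("/preloaded.tar.lz4")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// Decompress the lz4 stream and walk the tar entries inside it.
    	tr := tar.NewReader(lz4.NewReader(f))
    	for {
    		hdr, err := tr.Next()
    		if err == io.EOF {
    			break
    		}
    		if err != nil {
    			log.Fatal(err)
    		}
    		dst := filepath.Join("/var", hdr.Name)
    		switch hdr.Typeflag {
    		case tar.TypeDir:
    			os.MkdirAll(dst, os.FileMode(hdr.Mode))
    		case tar.TypeReg:
    			os.MkdirAll(filepath.Dir(dst), 0755)
    			out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
    			if err != nil {
    				log.Fatal(err)
    			}
    			if _, err := io.Copy(out, tr); err != nil {
    				log.Fatal(err)
    			}
    			out.Close()
    		}
    	}
    }
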
	I1120 21:33:57.211565   38602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:33:57.249798   38602 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:33:57.249823   38602 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:33:57.249831   38602 kubeadm.go:935] updating node { 192.168.39.223 8443 v1.32.0 crio true true} ...
	I1120 21:33:57.249943   38602 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-787681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-787681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:33:57.250016   38602 ssh_runner.go:195] Run: crio config
	I1120 21:33:57.297911   38602 cni.go:84] Creating CNI manager for ""
	I1120 21:33:57.297943   38602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:33:57.297965   38602 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:33:57.298010   38602 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-787681 NodeName:test-preload-787681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:33:57.298151   38602 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-787681"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.223"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:33:57.298232   38602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1120 21:33:57.311291   38602 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:33:57.311352   38602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:33:57.323797   38602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1120 21:33:57.344546   38602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:33:57.366052   38602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1120 21:33:57.388578   38602 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I1120 21:33:57.393007   38602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:33:57.408982   38602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:33:57.557873   38602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:33:57.579138   38602 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681 for IP: 192.168.39.223
	I1120 21:33:57.579164   38602 certs.go:195] generating shared ca certs ...
	I1120 21:33:57.579180   38602 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:33:57.579375   38602 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 21:33:57.579435   38602 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 21:33:57.579449   38602 certs.go:257] generating profile certs ...
	I1120 21:33:57.579559   38602 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/client.key
	I1120 21:33:57.579625   38602 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/apiserver.key.3669a3a0
	I1120 21:33:57.579671   38602 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/proxy-client.key
	I1120 21:33:57.579813   38602 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem (1338 bytes)
	W1120 21:33:57.579866   38602 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706_empty.pem, impossibly tiny 0 bytes
	I1120 21:33:57.579882   38602 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 21:33:57.579930   38602 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:33:57.579961   38602 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:33:57.579993   38602 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 21:33:57.580048   38602 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:33:57.580602   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:33:57.624106   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:33:57.659112   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:33:57.695215   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:33:57.726913   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1120 21:33:57.759749   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:33:57.790530   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:33:57.822063   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:33:57.852053   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:33:57.884223   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem --> /usr/share/ca-certificates/7706.pem (1338 bytes)
	I1120 21:33:57.916706   38602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /usr/share/ca-certificates/77062.pem (1708 bytes)
	I1120 21:33:57.949172   38602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:33:57.973162   38602 ssh_runner.go:195] Run: openssl version
	I1120 21:33:57.980400   38602 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7706.pem
	I1120 21:33:57.993634   38602 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7706.pem /etc/ssl/certs/7706.pem
	I1120 21:33:58.007139   38602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7706.pem
	I1120 21:33:58.013195   38602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:36 /usr/share/ca-certificates/7706.pem
	I1120 21:33:58.013290   38602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7706.pem
	I1120 21:33:58.021420   38602 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:33:58.034043   38602 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7706.pem /etc/ssl/certs/51391683.0
	I1120 21:33:58.046402   38602 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77062.pem
	I1120 21:33:58.059583   38602 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77062.pem /etc/ssl/certs/77062.pem
	I1120 21:33:58.071749   38602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77062.pem
	I1120 21:33:58.078114   38602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:36 /usr/share/ca-certificates/77062.pem
	I1120 21:33:58.078190   38602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77062.pem
	I1120 21:33:58.086127   38602 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:33:58.099000   38602 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/77062.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:33:58.111502   38602 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:33:58.124021   38602 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:33:58.136960   38602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:33:58.142573   38602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:33:58.142630   38602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:33:58.150136   38602 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:33:58.162258   38602 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:33:58.174958   38602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:33:58.180788   38602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:33:58.188841   38602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:33:58.196785   38602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:33:58.204772   38602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:33:58.212701   38602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:33:58.220679   38602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
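
Each `openssl x509 -checkend 86400` run above asks one question: does the certificate expire within the next 24 hours? The same check expressed in Go (paths taken from the log; this mirrors the openssl semantics rather than minikube's internals):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		soon, err := expiresWithin(p, 24*time.Hour)
    		fmt.Println(p, soon, err)
    	}
    }
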
	I1120 21:33:58.228249   38602 kubeadm.go:401] StartCluster: {Name:test-preload-787681 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-787681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:33:58.228368   38602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:33:58.228448   38602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:33:58.263376   38602 cri.go:89] found id: ""
	I1120 21:33:58.263457   38602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:33:58.278763   38602 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:33:58.278787   38602 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:33:58.278843   38602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:33:58.292288   38602 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:33:58.292745   38602 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-787681" does not appear in /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:33:58.292875   38602 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-3793/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-787681" cluster setting kubeconfig missing "test-preload-787681" context setting]
	I1120 21:33:58.293130   38602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:33:58.293659   38602 kapi.go:59] client config for test-preload-787681: &rest.Config{Host:"https://192.168.39.223:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/client.key", CAFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:33:58.294021   38602 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 21:33:58.294035   38602 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:33:58.294040   38602 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:33:58.294044   38602 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:33:58.294047   38602 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:33:58.294457   38602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:33:58.312521   38602 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.223
	I1120 21:33:58.312556   38602 kubeadm.go:1161] stopping kube-system containers ...
	I1120 21:33:58.312568   38602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1120 21:33:58.312632   38602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:33:58.359062   38602 cri.go:89] found id: ""
	I1120 21:33:58.359147   38602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1120 21:33:58.380329   38602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:33:58.392706   38602 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:33:58.392732   38602 kubeadm.go:158] found existing configuration files:
	
	I1120 21:33:58.392789   38602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:33:58.404716   38602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:33:58.404796   38602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:33:58.418096   38602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:33:58.430326   38602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:33:58.430396   38602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:33:58.442751   38602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:33:58.453690   38602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:33:58.453744   38602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:33:58.465815   38602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:33:58.477354   38602 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:33:58.477426   38602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
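
Each grep-then-rm pair above applies one rule: if /etc/kubernetes/<name>.conf does not mention the expected endpoint https://control-plane.minikube.internal:8443 (here all four files are simply absent, hence the grep exit status 2), the file is removed so the kubeadm phases below can regenerate it. A sketch of that pattern, assuming direct filesystem access rather than minikube's ssh_runner:

    package main

    import (
        "os"
        "strings"
    )

    // pruneStaleKubeconfigs deletes any kubeconfig that does not reference
    // the expected control-plane endpoint, mirroring the grep/rm pairs above.
    func pruneStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // endpoint present: keep the file
            }
            _ = os.Remove(f) // rm -f semantics: ignore "not found"
        }
    }

    func main() {
        pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }
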
	I1120 21:33:58.489740   38602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:33:58.501818   38602 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1120 21:33:58.561355   38602 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1120 21:33:59.799311   38602 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.237918346s)
	I1120 21:33:59.799381   38602 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1120 21:34:00.054591   38602 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1120 21:34:00.122803   38602 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
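
Because this is a restart rather than a fresh install, minikube replays five individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of running a full kubeadm init. Sketched as a loop (plain local exec here; the real runs go through SSH with PATH pointing at /var/lib/minikube/binaries/v1.32.0):

    package main

    import "os/exec"

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            // Each phase is idempotent against the same kubeadm.yaml.
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }
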
	I1120 21:34:00.188621   38602 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:34:00.188718   38602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:34:00.689025   38602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:34:01.189239   38602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:34:01.689724   38602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:34:02.189893   38602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:34:02.689094   38602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:34:02.720840   38602 api_server.go:72] duration metric: took 2.532233462s to wait for apiserver process to appear ...
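
The 500ms cadence of the pgrep runs above is a simple readiness poll: pgrep -xnf exits 0 as soon as some process command line matches kube-apiserver.*minikube.*. A sketch of that loop (function name illustrative; minikube drives the command through its ssh_runner):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Exit status 0 means at least one matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("kube-apiserver process did not appear")
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
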
	I1120 21:34:02.720941   38602 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:34:02.720974   38602 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1120 21:34:05.058927   38602 api_server.go:279] https://192.168.39.223:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:34:05.058954   38602 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:34:05.058969   38602 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1120 21:34:05.078537   38602 api_server.go:279] https://192.168.39.223:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:34:05.078570   38602 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:34:05.222019   38602 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1120 21:34:05.265827   38602 api_server.go:279] https://192.168.39.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:34:05.265902   38602 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:34:05.721643   38602 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1120 21:34:05.726611   38602 api_server.go:279] https://192.168.39.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:34:05.726645   38602 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:34:06.221318   38602 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1120 21:34:06.225735   38602 api_server.go:279] https://192.168.39.223:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:34:06.225766   38602 api_server.go:103] status: https://192.168.39.223:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:34:06.721287   38602 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1120 21:34:06.725634   38602 api_server.go:279] https://192.168.39.223:8443/healthz returned 200:
	ok
	I1120 21:34:06.732226   38602 api_server.go:141] control plane version: v1.32.0
	I1120 21:34:06.732249   38602 api_server.go:131] duration metric: took 4.011298956s to wait for apiserver health ...
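
The healthz progression above is the normal cold-start sequence: 403 first (the unauthenticated probe is rejected as system:anonymous until the rbac/bootstrap-roles poststarthook finishes), then 500 while the remaining [-] poststarthooks complete, then 200 with body "ok". Anything other than 200 means "retry". A sketch of such a poll; TLS verification is skipped because no client certificate is presented, which is exactly why the early replies are anonymous 403s:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    return nil // body is the literal "ok"
                }
                // 403 and 500 both mean "not ready yet": keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.223:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
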
	I1120 21:34:06.732264   38602 cni.go:84] Creating CNI manager for ""
	I1120 21:34:06.732273   38602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:34:06.733819   38602 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1120 21:34:06.735075   38602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1120 21:34:06.748614   38602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
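
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain announced on the previous lines. Its contents are not shown in this log; the sketch below writes a representative bridge+portmap conflist (all values illustrative, the real file differs):

    package main

    import "os"

    // Illustrative bridge CNI config; minikube's actual 496-byte conflist differs.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
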
	I1120 21:34:06.776786   38602 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:34:06.783738   38602 system_pods.go:59] 7 kube-system pods found
	I1120 21:34:06.783799   38602 system_pods.go:61] "coredns-668d6bf9bc-psng5" [87f4e339-7528-4360-8d05-9fd3b00685c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:34:06.783814   38602 system_pods.go:61] "etcd-test-preload-787681" [bd3bb8f3-dfb3-447d-a7a4-037db5ea064e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:34:06.783826   38602 system_pods.go:61] "kube-apiserver-test-preload-787681" [4a44c926-26cb-4847-80a1-edee59125190] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:34:06.783839   38602 system_pods.go:61] "kube-controller-manager-test-preload-787681" [622e1996-e391-419d-8a4d-0bb00041bef3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:34:06.783865   38602 system_pods.go:61] "kube-proxy-ks59b" [79dbc5f8-63fb-4006-892f-56349d8b3920] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:34:06.783878   38602 system_pods.go:61] "kube-scheduler-test-preload-787681" [c175d339-a6a6-40d7-9323-71f20ab5d357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:34:06.783891   38602 system_pods.go:61] "storage-provisioner" [120ea91a-9ce8-40d2-9344-ec34430745d0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:34:06.783903   38602 system_pods.go:74] duration metric: took 7.094226ms to wait for pod list to return data ...
	I1120 21:34:06.783918   38602 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:34:06.788304   38602 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1120 21:34:06.788340   38602 node_conditions.go:123] node cpu capacity is 2
	I1120 21:34:06.788356   38602 node_conditions.go:105] duration metric: took 4.429078ms to run NodePressure ...
	I1120 21:34:06.788424   38602 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1120 21:34:07.051732   38602 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1120 21:34:07.056230   38602 kubeadm.go:744] kubelet initialised
	I1120 21:34:07.056249   38602 kubeadm.go:745] duration metric: took 4.49558ms waiting for restarted kubelet to initialise ...
	I1120 21:34:07.056264   38602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:34:07.078994   38602 ops.go:34] apiserver oom_adj: -16
	I1120 21:34:07.079015   38602 kubeadm.go:602] duration metric: took 8.800222798s to restartPrimaryControlPlane
	I1120 21:34:07.079024   38602 kubeadm.go:403] duration metric: took 8.850783674s to StartCluster
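
The oom_adj probe closing out StartCluster confirms the restarted apiserver is shielded from the kernel OOM killer: -16 on the legacy oom_adj scale corresponds to the strongly negative oom_score_adj the kubelet assigns to critical static pods. Reading the value in Go (helper name illustrative):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // readOOMAdj returns /proc/<pid>/oom_adj, the value the
    // `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe above prints.
    func readOOMAdj(pid int) (int, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(b)))
    }

    func main() {
        v, err := readOOMAdj(os.Getpid())
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("oom_adj:", v) // -16 means "avoid killing this process"
    }
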
	I1120 21:34:07.079039   38602 settings.go:142] acquiring lock: {Name:mke92973c8f33ef32fe11f7b266adf74cd3ec47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:34:07.079119   38602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:34:07.079657   38602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:34:07.079945   38602 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:34:07.080016   38602 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:34:07.080113   38602 addons.go:70] Setting storage-provisioner=true in profile "test-preload-787681"
	I1120 21:34:07.080132   38602 addons.go:239] Setting addon storage-provisioner=true in "test-preload-787681"
	W1120 21:34:07.080140   38602 addons.go:248] addon storage-provisioner should already be in state true
	I1120 21:34:07.080130   38602 addons.go:70] Setting default-storageclass=true in profile "test-preload-787681"
	I1120 21:34:07.080168   38602 host.go:66] Checking if "test-preload-787681" exists ...
	I1120 21:34:07.080181   38602 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-787681"
	I1120 21:34:07.080191   38602 config.go:182] Loaded profile config "test-preload-787681": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1120 21:34:07.081533   38602 out.go:179] * Verifying Kubernetes components...
	I1120 21:34:07.082528   38602 kapi.go:59] client config for test-preload-787681: &rest.Config{Host:"https://192.168.39.223:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/client.key", CAFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:34:07.082756   38602 addons.go:239] Setting addon default-storageclass=true in "test-preload-787681"
	W1120 21:34:07.082767   38602 addons.go:248] addon default-storageclass should already be in state true
	I1120 21:34:07.082784   38602 host.go:66] Checking if "test-preload-787681" exists ...
	I1120 21:34:07.082893   38602 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:34:07.082944   38602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:34:07.084088   38602 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:34:07.084103   38602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:34:07.084238   38602 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:34:07.084252   38602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:34:07.086941   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:34:07.087330   38602 main.go:143] libmachine: domain test-preload-787681 has defined MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:34:07.087381   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:34:07.087414   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:34:07.087573   38602 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/test-preload-787681/id_rsa Username:docker}
	I1120 21:34:07.087876   38602 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cd:15:ff", ip: ""} in network mk-test-preload-787681: {Iface:virbr1 ExpiryTime:2025-11-20 22:33:46 +0000 UTC Type:0 Mac:52:54:00:cd:15:ff Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:test-preload-787681 Clientid:01:52:54:00:cd:15:ff}
	I1120 21:34:07.087912   38602 main.go:143] libmachine: domain test-preload-787681 has defined IP address 192.168.39.223 and MAC address 52:54:00:cd:15:ff in network mk-test-preload-787681
	I1120 21:34:07.088113   38602 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/test-preload-787681/id_rsa Username:docker}
	I1120 21:34:07.459599   38602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:34:07.496593   38602 node_ready.go:35] waiting up to 6m0s for node "test-preload-787681" to be "Ready" ...
	I1120 21:34:07.515528   38602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:34:07.556555   38602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:34:08.376860   38602 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1120 21:34:08.378010   38602 addons.go:515] duration metric: took 1.297999663s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1120 21:34:09.501467   38602 node_ready.go:57] node "test-preload-787681" has "Ready":"False" status (will retry)
	W1120 21:34:12.001585   38602 node_ready.go:57] node "test-preload-787681" has "Ready":"False" status (will retry)
	W1120 21:34:14.500416   38602 node_ready.go:57] node "test-preload-787681" has "Ready":"False" status (will retry)
	I1120 21:34:15.999700   38602 node_ready.go:49] node "test-preload-787681" is "Ready"
	I1120 21:34:15.999750   38602 node_ready.go:38] duration metric: took 8.503076141s for node "test-preload-787681" to be "Ready" ...
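
node_ready.go's loop above is watching the NodeReady condition on the Node object; the three retries between 21:34:09 and 21:34:14 are that condition still reporting False while the kubelet re-registers. The equivalent one-shot check with client-go, using the kubeconfig path from this run:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21923-3793/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-787681", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Println("Ready:", c.Status == corev1.ConditionTrue)
            }
        }
    }
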
	I1120 21:34:15.999768   38602 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:34:15.999834   38602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:34:16.021762   38602 api_server.go:72] duration metric: took 8.9417824s to wait for apiserver process to appear ...
	I1120 21:34:16.021791   38602 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:34:16.021808   38602 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1120 21:34:16.027683   38602 api_server.go:279] https://192.168.39.223:8443/healthz returned 200:
	ok
	I1120 21:34:16.028828   38602 api_server.go:141] control plane version: v1.32.0
	I1120 21:34:16.028872   38602 api_server.go:131] duration metric: took 7.072843ms to wait for apiserver health ...
	I1120 21:34:16.028888   38602 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:34:16.032027   38602 system_pods.go:59] 7 kube-system pods found
	I1120 21:34:16.032054   38602 system_pods.go:61] "coredns-668d6bf9bc-psng5" [87f4e339-7528-4360-8d05-9fd3b00685c5] Running
	I1120 21:34:16.032065   38602 system_pods.go:61] "etcd-test-preload-787681" [bd3bb8f3-dfb3-447d-a7a4-037db5ea064e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:34:16.032075   38602 system_pods.go:61] "kube-apiserver-test-preload-787681" [4a44c926-26cb-4847-80a1-edee59125190] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:34:16.032085   38602 system_pods.go:61] "kube-controller-manager-test-preload-787681" [622e1996-e391-419d-8a4d-0bb00041bef3] Running
	I1120 21:34:16.032096   38602 system_pods.go:61] "kube-proxy-ks59b" [79dbc5f8-63fb-4006-892f-56349d8b3920] Running
	I1120 21:34:16.032100   38602 system_pods.go:61] "kube-scheduler-test-preload-787681" [c175d339-a6a6-40d7-9323-71f20ab5d357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:34:16.032104   38602 system_pods.go:61] "storage-provisioner" [120ea91a-9ce8-40d2-9344-ec34430745d0] Running
	I1120 21:34:16.032110   38602 system_pods.go:74] duration metric: took 3.216389ms to wait for pod list to return data ...
	I1120 21:34:16.032118   38602 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:34:16.034708   38602 default_sa.go:45] found service account: "default"
	I1120 21:34:16.034728   38602 default_sa.go:55] duration metric: took 2.6042ms for default service account to be created ...
	I1120 21:34:16.034737   38602 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:34:16.037312   38602 system_pods.go:86] 7 kube-system pods found
	I1120 21:34:16.037336   38602 system_pods.go:89] "coredns-668d6bf9bc-psng5" [87f4e339-7528-4360-8d05-9fd3b00685c5] Running
	I1120 21:34:16.037346   38602 system_pods.go:89] "etcd-test-preload-787681" [bd3bb8f3-dfb3-447d-a7a4-037db5ea064e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:34:16.037352   38602 system_pods.go:89] "kube-apiserver-test-preload-787681" [4a44c926-26cb-4847-80a1-edee59125190] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:34:16.037360   38602 system_pods.go:89] "kube-controller-manager-test-preload-787681" [622e1996-e391-419d-8a4d-0bb00041bef3] Running
	I1120 21:34:16.037366   38602 system_pods.go:89] "kube-proxy-ks59b" [79dbc5f8-63fb-4006-892f-56349d8b3920] Running
	I1120 21:34:16.037374   38602 system_pods.go:89] "kube-scheduler-test-preload-787681" [c175d339-a6a6-40d7-9323-71f20ab5d357] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:34:16.037377   38602 system_pods.go:89] "storage-provisioner" [120ea91a-9ce8-40d2-9344-ec34430745d0] Running
	I1120 21:34:16.037386   38602 system_pods.go:126] duration metric: took 2.643962ms to wait for k8s-apps to be running ...
	I1120 21:34:16.037394   38602 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:34:16.037432   38602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:34:16.055363   38602 system_svc.go:56] duration metric: took 17.959334ms WaitForService to wait for kubelet
	I1120 21:34:16.055394   38602 kubeadm.go:587] duration metric: took 8.975422625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:34:16.055413   38602 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:34:16.058911   38602 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1120 21:34:16.058932   38602 node_conditions.go:123] node cpu capacity is 2
	I1120 21:34:16.058942   38602 node_conditions.go:105] duration metric: took 3.524841ms to run NodePressure ...
	I1120 21:34:16.058953   38602 start.go:242] waiting for startup goroutines ...
	I1120 21:34:16.058959   38602 start.go:247] waiting for cluster config update ...
	I1120 21:34:16.058969   38602 start.go:256] writing updated cluster config ...
	I1120 21:34:16.059228   38602 ssh_runner.go:195] Run: rm -f paused
	I1120 21:34:16.064957   38602 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:34:16.065632   38602 kapi.go:59] client config for test-preload-787681: &rest.Config{Host:"https://192.168.39.223:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/profiles/test-preload-787681/client.key", CAFile:"/home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:34:16.068757   38602 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-psng5" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:16.073000   38602 pod_ready.go:94] pod "coredns-668d6bf9bc-psng5" is "Ready"
	I1120 21:34:16.073028   38602 pod_ready.go:86] duration metric: took 4.241807ms for pod "coredns-668d6bf9bc-psng5" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:16.075163   38602 pod_ready.go:83] waiting for pod "etcd-test-preload-787681" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:34:18.081450   38602 pod_ready.go:104] pod "etcd-test-preload-787681" is not "Ready", error: <nil>
	W1120 21:34:20.081733   38602 pod_ready.go:104] pod "etcd-test-preload-787681" is not "Ready", error: <nil>
	I1120 21:34:21.581443   38602 pod_ready.go:94] pod "etcd-test-preload-787681" is "Ready"
	I1120 21:34:21.581481   38602 pod_ready.go:86] duration metric: took 5.506295331s for pod "etcd-test-preload-787681" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:21.588604   38602 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-787681" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:21.594185   38602 pod_ready.go:94] pod "kube-apiserver-test-preload-787681" is "Ready"
	I1120 21:34:21.594220   38602 pod_ready.go:86] duration metric: took 5.58242ms for pod "kube-apiserver-test-preload-787681" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:21.597379   38602 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-787681" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:21.601760   38602 pod_ready.go:94] pod "kube-controller-manager-test-preload-787681" is "Ready"
	I1120 21:34:21.601793   38602 pod_ready.go:86] duration metric: took 4.387308ms for pod "kube-controller-manager-test-preload-787681" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:21.604731   38602 pod_ready.go:83] waiting for pod "kube-proxy-ks59b" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:21.780308   38602 pod_ready.go:94] pod "kube-proxy-ks59b" is "Ready"
	I1120 21:34:21.780348   38602 pod_ready.go:86] duration metric: took 175.585987ms for pod "kube-proxy-ks59b" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:21.979314   38602 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-787681" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:22.378941   38602 pod_ready.go:94] pod "kube-scheduler-test-preload-787681" is "Ready"
	I1120 21:34:22.378966   38602 pod_ready.go:86] duration metric: took 399.628001ms for pod "kube-scheduler-test-preload-787681" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:34:22.378992   38602 pod_ready.go:40] duration metric: took 6.314008097s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
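
The "extra waiting" in pod_ready.go selects each control-plane component by label (component=etcd, k8s-app=kube-proxy, and so on) and blocks until the pod's Ready condition is True or the pod is gone. One selector's worth of that check, again sketched with client-go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21923-3793/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "component=etcd"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("%s Ready=%s\n", p.Name, c.Status)
                }
            }
        }
    }
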
	I1120 21:34:22.423499   38602 start.go:628] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1120 21:34:22.425357   38602 out.go:203] 
	W1120 21:34:22.426581   38602 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1120 21:34:22.427775   38602 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1120 21:34:22.429220   38602 out.go:179] * Done! kubectl is now configured to use "test-preload-787681" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.197848597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763674463197821318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd5aa58f-7bf5-426b-9c15-4349b89a3680 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.199214346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=189fbef6-0925-4446-afb5-1e5dda838d63 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.199386214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=189fbef6-0925-4446-afb5-1e5dda838d63 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.200113818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4878fe266e8fbed639008baa9b2cd7befd838ef4cd79d14f0f21781334f04d3e,PodSandboxId:8c827cce8350c3f3c31c8c362db9f743624ff64819339ba6f303db6e493655bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763674453258327776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-psng5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f4e339-7528-4360-8d05-9fd3b00685c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a12345af736f6decbcfef5361fc497ee243ec420644bbe70370f3e540b6114,PodSandboxId:9d467e615defde321d30885976f0614a572e75c0b2f7db816dbd25bd909c6e53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763674447055174789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ks59b,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 79dbc5f8-63fb-4006-892f-56349d8b3920,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62cb606aff4267275bc837adb75641bf185bc4b1872d94d4a631743afadc91e6,PodSandboxId:6f6a0fd427c2f7167afafbcc1ccfd2dbd30918deee293ed170f76868687ab455,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763674446448202914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12
0ea91a-9ce8-40d2-9344-ec34430745d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edbe418b06a38fdd6e2d8855c5ac22873f45cc61b3d7faaa4cc0f71ca6f4c60,PodSandboxId:fc0e608a9accefd3024ab32f40a27277d34d2fd6cfa08e52b6b5036588047a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763674442180113716,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 1bbb5e43d6b760887a2ba868c3e7be79,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6931f991946828199a9002202612e270fb29b9224174ee40de85b8bd222db1,PodSandboxId:5161f6b6ac90a022ece54f1e6743e0a6ebb3d5cc1a1d2b58be16eda47df5d3c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763674442137827572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: fab7b401663b0c9273aa3059c6e98448,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55da2a78f2f87f2ea898b5c2eecb47a96a361ece36c327902de8491c89ca4e5,PodSandboxId:c26f65d580bd937d3de58029bb269912c36184099bd805446aa7ef961d329804,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763674442069355042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 169c530613e9461220857060206b0eaa,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28502151b6a86c60a5cbf58411f71bf520d6dc334b72e118e887d77c6239e3a,PodSandboxId:6c8841436966b57cf67dcffef7d6698e190d1a6b5873c6804bdf8088a6ccdabd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763674442082921777,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fbb667548615db5f6af420c7df4e7e,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=189fbef6-0925-4446-afb5-1e5dda838d63 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.240621021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca033bf3-7d0b-4e12-a4e3-dee9a57325fa name=/runtime.v1.RuntimeService/Version
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.240714875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca033bf3-7d0b-4e12-a4e3-dee9a57325fa name=/runtime.v1.RuntimeService/Version
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.242165104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69ab7d84-1349-42fd-8996-75223d1423d1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.242631556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763674463242558892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69ab7d84-1349-42fd-8996-75223d1423d1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.243757513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97b137c0-f4c6-47fc-887e-435e4a85dac2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.244044960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97b137c0-f4c6-47fc-887e-435e4a85dac2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.244557492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4878fe266e8fbed639008baa9b2cd7befd838ef4cd79d14f0f21781334f04d3e,PodSandboxId:8c827cce8350c3f3c31c8c362db9f743624ff64819339ba6f303db6e493655bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763674453258327776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-psng5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f4e339-7528-4360-8d05-9fd3b00685c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a12345af736f6decbcfef5361fc497ee243ec420644bbe70370f3e540b6114,PodSandboxId:9d467e615defde321d30885976f0614a572e75c0b2f7db816dbd25bd909c6e53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763674447055174789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ks59b,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 79dbc5f8-63fb-4006-892f-56349d8b3920,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62cb606aff4267275bc837adb75641bf185bc4b1872d94d4a631743afadc91e6,PodSandboxId:6f6a0fd427c2f7167afafbcc1ccfd2dbd30918deee293ed170f76868687ab455,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763674446448202914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12
0ea91a-9ce8-40d2-9344-ec34430745d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edbe418b06a38fdd6e2d8855c5ac22873f45cc61b3d7faaa4cc0f71ca6f4c60,PodSandboxId:fc0e608a9accefd3024ab32f40a27277d34d2fd6cfa08e52b6b5036588047a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763674442180113716,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 1bbb5e43d6b760887a2ba868c3e7be79,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6931f991946828199a9002202612e270fb29b9224174ee40de85b8bd222db1,PodSandboxId:5161f6b6ac90a022ece54f1e6743e0a6ebb3d5cc1a1d2b58be16eda47df5d3c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763674442137827572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: fab7b401663b0c9273aa3059c6e98448,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55da2a78f2f87f2ea898b5c2eecb47a96a361ece36c327902de8491c89ca4e5,PodSandboxId:c26f65d580bd937d3de58029bb269912c36184099bd805446aa7ef961d329804,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763674442069355042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 169c530613e9461220857060206b0eaa,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28502151b6a86c60a5cbf58411f71bf520d6dc334b72e118e887d77c6239e3a,PodSandboxId:6c8841436966b57cf67dcffef7d6698e190d1a6b5873c6804bdf8088a6ccdabd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763674442082921777,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fbb667548615db5f6af420c7df4e7e,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97b137c0-f4c6-47fc-887e-435e4a85dac2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.281109576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c02ebff0-e6b6-4636-8c7b-9f8ddaf4014f name=/runtime.v1.RuntimeService/Version
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.281203123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c02ebff0-e6b6-4636-8c7b-9f8ddaf4014f name=/runtime.v1.RuntimeService/Version
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.283223173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dddd2ee-773e-4855-9d70-5eafaa5123ce name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.283670383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763674463283647046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dddd2ee-773e-4855-9d70-5eafaa5123ce name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.285372159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cadd5b03-1f21-4868-8a5e-c106b97e4796 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.285528655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cadd5b03-1f21-4868-8a5e-c106b97e4796 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.286185981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4878fe266e8fbed639008baa9b2cd7befd838ef4cd79d14f0f21781334f04d3e,PodSandboxId:8c827cce8350c3f3c31c8c362db9f743624ff64819339ba6f303db6e493655bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763674453258327776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-psng5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f4e339-7528-4360-8d05-9fd3b00685c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a12345af736f6decbcfef5361fc497ee243ec420644bbe70370f3e540b6114,PodSandboxId:9d467e615defde321d30885976f0614a572e75c0b2f7db816dbd25bd909c6e53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763674447055174789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ks59b,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 79dbc5f8-63fb-4006-892f-56349d8b3920,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62cb606aff4267275bc837adb75641bf185bc4b1872d94d4a631743afadc91e6,PodSandboxId:6f6a0fd427c2f7167afafbcc1ccfd2dbd30918deee293ed170f76868687ab455,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763674446448202914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12
0ea91a-9ce8-40d2-9344-ec34430745d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edbe418b06a38fdd6e2d8855c5ac22873f45cc61b3d7faaa4cc0f71ca6f4c60,PodSandboxId:fc0e608a9accefd3024ab32f40a27277d34d2fd6cfa08e52b6b5036588047a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763674442180113716,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 1bbb5e43d6b760887a2ba868c3e7be79,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6931f991946828199a9002202612e270fb29b9224174ee40de85b8bd222db1,PodSandboxId:5161f6b6ac90a022ece54f1e6743e0a6ebb3d5cc1a1d2b58be16eda47df5d3c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763674442137827572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: fab7b401663b0c9273aa3059c6e98448,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55da2a78f2f87f2ea898b5c2eecb47a96a361ece36c327902de8491c89ca4e5,PodSandboxId:c26f65d580bd937d3de58029bb269912c36184099bd805446aa7ef961d329804,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763674442069355042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 169c530613e9461220857060206b0eaa,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28502151b6a86c60a5cbf58411f71bf520d6dc334b72e118e887d77c6239e3a,PodSandboxId:6c8841436966b57cf67dcffef7d6698e190d1a6b5873c6804bdf8088a6ccdabd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763674442082921777,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fbb667548615db5f6af420c7df4e7e,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cadd5b03-1f21-4868-8a5e-c106b97e4796 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.324050264Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c742773b-2420-4272-8bd8-2d82e740f70f name=/runtime.v1.RuntimeService/Version
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.324138937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c742773b-2420-4272-8bd8-2d82e740f70f name=/runtime.v1.RuntimeService/Version
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.329013711Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bd52f57-97d2-44a3-8786-ea23559ce52e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.329862647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763674463329837365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bd52f57-97d2-44a3-8786-ea23559ce52e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.331468258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ce808b2-15ad-482e-a1ea-6efa72032550 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.331671343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ce808b2-15ad-482e-a1ea-6efa72032550 name=/runtime.v1.RuntimeService/ListContainers
	Nov 20 21:34:23 test-preload-787681 crio[843]: time="2025-11-20 21:34:23.331911608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4878fe266e8fbed639008baa9b2cd7befd838ef4cd79d14f0f21781334f04d3e,PodSandboxId:8c827cce8350c3f3c31c8c362db9f743624ff64819339ba6f303db6e493655bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763674453258327776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-psng5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f4e339-7528-4360-8d05-9fd3b00685c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a12345af736f6decbcfef5361fc497ee243ec420644bbe70370f3e540b6114,PodSandboxId:9d467e615defde321d30885976f0614a572e75c0b2f7db816dbd25bd909c6e53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763674447055174789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ks59b,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 79dbc5f8-63fb-4006-892f-56349d8b3920,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62cb606aff4267275bc837adb75641bf185bc4b1872d94d4a631743afadc91e6,PodSandboxId:6f6a0fd427c2f7167afafbcc1ccfd2dbd30918deee293ed170f76868687ab455,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763674446448202914,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12
0ea91a-9ce8-40d2-9344-ec34430745d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edbe418b06a38fdd6e2d8855c5ac22873f45cc61b3d7faaa4cc0f71ca6f4c60,PodSandboxId:fc0e608a9accefd3024ab32f40a27277d34d2fd6cfa08e52b6b5036588047a81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763674442180113716,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 1bbb5e43d6b760887a2ba868c3e7be79,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a6931f991946828199a9002202612e270fb29b9224174ee40de85b8bd222db1,PodSandboxId:5161f6b6ac90a022ece54f1e6743e0a6ebb3d5cc1a1d2b58be16eda47df5d3c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763674442137827572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: fab7b401663b0c9273aa3059c6e98448,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55da2a78f2f87f2ea898b5c2eecb47a96a361ece36c327902de8491c89ca4e5,PodSandboxId:c26f65d580bd937d3de58029bb269912c36184099bd805446aa7ef961d329804,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763674442069355042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 169c530613e9461220857060206b0eaa,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28502151b6a86c60a5cbf58411f71bf520d6dc334b72e118e887d77c6239e3a,PodSandboxId:6c8841436966b57cf67dcffef7d6698e190d1a6b5873c6804bdf8088a6ccdabd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763674442082921777,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-787681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20fbb667548615db5f6af420c7df4e7e,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ce808b2-15ad-482e-a1ea-6efa72032550 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	4878fe266e8fb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago      Running             coredns                   1                   8c827cce8350c       coredns-668d6bf9bc-psng5                      kube-system
	79a12345af736       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   9d467e615defd       kube-proxy-ks59b                              kube-system
	62cb606aff426       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   6f6a0fd427c2f       storage-provisioner                           kube-system
	7edbe418b06a3       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   21 seconds ago      Running             kube-controller-manager   1                   fc0e608a9acce       kube-controller-manager-test-preload-787681   kube-system
	6a6931f991946       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   21 seconds ago      Running             kube-apiserver            1                   5161f6b6ac90a       kube-apiserver-test-preload-787681            kube-system
	e28502151b6a8       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   21 seconds ago      Running             kube-scheduler            1                   6c8841436966b       kube-scheduler-test-preload-787681            kube-system
	a55da2a78f2f8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      1                   c26f65d580bd9       etcd-test-preload-787681                      kube-system
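The table above is CRI output. To regenerate it on the node itself, something like the following should work (a sketch — the profile name is taken from this run, and crictl output columns can differ slightly by version):

	minikube ssh -p test-preload-787681 -- sudo crictl ps -a

All seven kube-system containers report ATTEMPT 1, consistent with a restart of an existing cluster rather than a fresh start.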
	
	
	==> coredns [4878fe266e8fbed639008baa9b2cd7befd838ef4cd79d14f0f21781334f04d3e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50234 - 15891 "HINFO IN 791781014786106592.3339095322851963434. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.050404785s
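CoreDNS came up cleanly; the random-name HINFO query is the loop plugin's self-probe, and the NXDOMAIN answer means no forwarding loop was detected. A hedged way to double-check in-cluster resolution from a throwaway pod (the image tag is illustrative):

	kubectl --context test-preload-787681 run dnscheck --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local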
	
	
	==> describe nodes <==
	Name:               test-preload-787681
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-787681
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=test-preload-787681
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_33_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:33:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-787681
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:34:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:34:15 +0000   Thu, 20 Nov 2025 21:33:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:34:15 +0000   Thu, 20 Nov 2025 21:33:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:34:15 +0000   Thu, 20 Nov 2025 21:33:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:34:15 +0000   Thu, 20 Nov 2025 21:34:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.223
	  Hostname:    test-preload-787681
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f5035eba31b4dbdb091f5869427f1ba
	  System UUID:                3f5035eb-a31b-4dbd-b091-f5869427f1ba
	  Boot ID:                    5f5f97ca-1030-4d34-b59d-33e0a6fff1b2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-psng5                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     73s
	  kube-system                 etcd-test-preload-787681                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         78s
	  kube-system                 kube-apiserver-test-preload-787681             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-test-preload-787681    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-ks59b                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-test-preload-787681             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 72s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  78s                kubelet          Node test-preload-787681 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    78s                kubelet          Node test-preload-787681 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s                kubelet          Node test-preload-787681 status is now: NodeHasSufficientPID
	  Normal   Starting                 78s                kubelet          Starting kubelet.
	  Normal   NodeReady                77s                kubelet          Node test-preload-787681 status is now: NodeReady
	  Normal   RegisteredNode           74s                node-controller  Node test-preload-787681 event: Registered Node test-preload-787681 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-787681 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-787681 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-787681 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-787681 has been rebooted, boot id: 5f5f97ca-1030-4d34-b59d-33e0a6fff1b2
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-787681 event: Registered Node test-preload-787681 in Controller
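The event stream tells the story of this test in miniature: a normal boot at 21:33, then a reboot (the Warning's boot id matches the System Info above) and re-registration roughly a minute later. The same view can be pulled with the usual command, assuming the kubectl context matches the minikube profile name:

	kubectl --context test-preload-787681 describe node test-preload-787681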
	
	
	==> dmesg <==
	[Nov20 21:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000088] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000097] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.003937] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085913] kauditd_printk_skb: 4 callbacks suppressed
	[Nov20 21:34] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.371325] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000064] kauditd_printk_skb: 128 callbacks suppressed
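Nothing in the ring buffer points at a hardware or storage problem: the NFSD and regulatory.db lines are routine for this Buildroot guest, and the kauditd lines are only rate-limit notices. A sketch for pulling a longer tail of the buffer:

	minikube ssh -p test-preload-787681 -- 'sudo dmesg | tail -n 40'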
	
	
	==> etcd [a55da2a78f2f87f2ea898b5c2eecb47a96a361ece36c327902de8491c89ca4e5] <==
	{"level":"info","ts":"2025-11-20T21:34:02.600340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd switched to configuration voters=(15917118417362859709)"}
	{"level":"info","ts":"2025-11-20T21:34:02.600433Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4eb1782ea0e4b224","local-member-id":"dce4f6de3abdb6bd","added-peer-id":"dce4f6de3abdb6bd","added-peer-peer-urls":["https://192.168.39.223:2380"]}
	{"level":"info","ts":"2025-11-20T21:34:02.600550Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4eb1782ea0e4b224","local-member-id":"dce4f6de3abdb6bd","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:34:02.600594Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:34:02.603648Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T21:34:02.608785Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"dce4f6de3abdb6bd","initial-advertise-peer-urls":["https://192.168.39.223:2380"],"listen-peer-urls":["https://192.168.39.223:2380"],"advertise-client-urls":["https://192.168.39.223:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.223:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T21:34:02.608864Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T21:34:02.604477Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.223:2380"}
	{"level":"info","ts":"2025-11-20T21:34:02.608933Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.223:2380"}
	{"level":"info","ts":"2025-11-20T21:34:03.938030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-20T21:34:03.938088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-20T21:34:03.938126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd received MsgPreVoteResp from dce4f6de3abdb6bd at term 2"}
	{"level":"info","ts":"2025-11-20T21:34:03.938139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became candidate at term 3"}
	{"level":"info","ts":"2025-11-20T21:34:03.938155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd received MsgVoteResp from dce4f6de3abdb6bd at term 3"}
	{"level":"info","ts":"2025-11-20T21:34:03.938163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became leader at term 3"}
	{"level":"info","ts":"2025-11-20T21:34:03.938172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dce4f6de3abdb6bd elected leader dce4f6de3abdb6bd at term 3"}
	{"level":"info","ts":"2025-11-20T21:34:03.940431Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"dce4f6de3abdb6bd","local-member-attributes":"{Name:test-preload-787681 ClientURLs:[https://192.168.39.223:2379]}","request-path":"/0/members/dce4f6de3abdb6bd/attributes","cluster-id":"4eb1782ea0e4b224","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T21:34:03.940572Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:34:03.940848Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:34:03.941253Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T21:34:03.941322Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-20T21:34:03.941625Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-20T21:34:03.942181Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-20T21:34:03.942356Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T21:34:03.943726Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.223:2379"}
	
	
	==> kernel <==
	 21:34:23 up 0 min,  0 users,  load average: 1.11, 0.31, 0.10
	Linux test-preload-787681 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6a6931f991946828199a9002202612e270fb29b9224174ee40de85b8bd222db1] <==
	I1120 21:34:05.123605       1 aggregator.go:171] initial CRD sync complete...
	I1120 21:34:05.123635       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 21:34:05.123641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:34:05.137545       1 shared_informer.go:320] Caches are synced for configmaps
	I1120 21:34:05.194151       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1120 21:34:05.194201       1 policy_source.go:240] refreshing policies
	I1120 21:34:05.223828       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1120 21:34:05.230533       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1120 21:34:05.230586       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:34:05.230593       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:34:05.230675       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:34:05.230765       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:34:05.245223       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:34:05.246151       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:34:05.249407       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:34:05.265316       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1120 21:34:05.292764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1120 21:34:06.030809       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:34:06.879360       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1120 21:34:06.922788       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1120 21:34:06.968802       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:34:06.987866       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:34:08.477483       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:34:08.728407       1 controller.go:615] quota admission added evaluator for: endpoints
	I1120 21:34:08.780525       1 controller.go:615] quota admission added evaluator for: replicasets.apps
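The apiserver finished its cache syncs at 21:34:05, and the "quota admission added evaluator" lines simply record resource kinds being touched for the first time after the restart; none of this indicates a failure. A quick readiness probe against the same apiserver (sketch):

	kubectl --context test-preload-787681 get --raw '/readyz?verbose'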
	
	
	==> kube-controller-manager [7edbe418b06a38fdd6e2d8855c5ac22873f45cc61b3d7faaa4cc0f71ca6f4c60] <==
	I1120 21:34:08.427793       1 shared_informer.go:320] Caches are synced for endpoint
	I1120 21:34:08.427858       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1120 21:34:08.427918       1 shared_informer.go:320] Caches are synced for TTL
	I1120 21:34:08.427924       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1120 21:34:08.430286       1 shared_informer.go:320] Caches are synced for job
	I1120 21:34:08.430430       1 shared_informer.go:320] Caches are synced for garbage collector
	I1120 21:34:08.431406       1 shared_informer.go:320] Caches are synced for namespace
	I1120 21:34:08.437157       1 shared_informer.go:320] Caches are synced for service account
	I1120 21:34:08.437274       1 shared_informer.go:320] Caches are synced for node
	I1120 21:34:08.437323       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:34:08.437373       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:34:08.437160       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1120 21:34:08.437405       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1120 21:34:08.437517       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1120 21:34:08.437666       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-787681"
	I1120 21:34:08.439215       1 shared_informer.go:320] Caches are synced for stateful set
	I1120 21:34:08.445450       1 shared_informer.go:320] Caches are synced for daemon sets
	I1120 21:34:08.792432       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="354.949299ms"
	I1120 21:34:08.792751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="148.738µs"
	I1120 21:34:13.399025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.007µs"
	I1120 21:34:14.389612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.215572ms"
	I1120 21:34:14.389719       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="59.251µs"
	I1120 21:34:15.601368       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-787681"
	I1120 21:34:15.621072       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-787681"
	I1120 21:34:18.380082       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
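The line at 21:34:18, "Exiting master disruption mode", marks the point where the node-lifecycle controller saw the rebooted node report Ready again. The matching node events can be filtered directly; events support a field selector on reason:

	kubectl --context test-preload-787681 get events -A --field-selector reason=RegisteredNode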
	
	
	==> kube-proxy [79a12345af736f6decbcfef5361fc497ee243ec420644bbe70370f3e540b6114] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1120 21:34:07.616886       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1120 21:34:07.645477       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.223"]
	E1120 21:34:07.645556       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:34:07.786621       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1120 21:34:07.786657       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1120 21:34:07.786685       1 server_linux.go:170] "Using iptables Proxier"
	I1120 21:34:07.789835       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:34:07.790673       1 server.go:497] "Version info" version="v1.32.0"
	I1120 21:34:07.790779       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:34:07.796612       1 config.go:199] "Starting service config controller"
	I1120 21:34:07.798814       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1120 21:34:07.801640       1 config.go:105] "Starting endpoint slice config controller"
	I1120 21:34:07.801797       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1120 21:34:07.807782       1 config.go:329] "Starting node config controller"
	I1120 21:34:07.807849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1120 21:34:07.901767       1 shared_informer.go:320] Caches are synced for service config
	I1120 21:34:07.902190       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1120 21:34:07.908927       1 shared_informer.go:320] Caches are synced for node config
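The two nftables errors at the top of this excerpt come from kube-proxy's startup cleanup of stale nftables rules; on this guest kernel the operation is unsupported, so the errors are cosmetic and kube-proxy proceeds in iptables mode, as the "Using iptables Proxier" line confirms. A rough way to verify the iptables rules actually landed:

	minikube ssh -p test-preload-787681 -- sudo iptables-save | grep -c 'KUBE-'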
	
	
	==> kube-scheduler [e28502151b6a86c60a5cbf58411f71bf520d6dc334b72e118e887d77c6239e3a] <==
	I1120 21:34:02.726117       1 serving.go:386] Generated self-signed cert in-memory
	W1120 21:34:05.071420       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 21:34:05.071465       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 21:34:05.071476       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 21:34:05.071487       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 21:34:05.147138       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1120 21:34:05.147255       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:34:05.154123       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:34:05.154203       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1120 21:34:05.155302       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1120 21:34:05.155420       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W1120 21:34:05.169513       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1120 21:34:05.171331       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1120 21:34:05.188624       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1120 21:34:05.189202       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1120 21:34:05.198655       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1120 21:34:05.198762       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1120 21:34:05.201197       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1120 21:34:05.202458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1120 21:34:05.203123       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1120 21:34:05.203145       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1120 21:34:06.654612       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
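The forbidden list/watch errors look like the usual startup race: the scheduler's informers began listing resources before the restarted apiserver's RBAC authorizer was fully ready, and the final "Caches are synced" line at 21:34:06 shows it recovered on its own. Leader election can be sanity-checked via the scheduler's Lease (sketch):

	kubectl --context test-preload-787681 -n kube-system get lease kube-scheduler -o yaml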
	
	
	==> kubelet <==
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: E1120 21:34:05.290993    1199 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-787681\" already exists" pod="kube-system/etcd-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: I1120 21:34:05.291102    1199 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: I1120 21:34:05.300448    1199 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: I1120 21:34:05.302593    1199 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: I1120 21:34:05.303033    1199 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: E1120 21:34:05.316080    1199 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-787681\" already exists" pod="kube-system/kube-scheduler-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: I1120 21:34:05.316107    1199 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: E1120 21:34:05.337482    1199 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-787681\" already exists" pod="kube-system/kube-scheduler-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: E1120 21:34:05.338168    1199 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-787681\" already exists" pod="kube-system/kube-apiserver-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: E1120 21:34:05.339556    1199 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-787681\" already exists" pod="kube-system/etcd-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: E1120 21:34:05.339613    1199 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-787681\" already exists" pod="kube-system/kube-apiserver-test-preload-787681"
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: E1120 21:34:05.778732    1199 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 20 21:34:05 test-preload-787681 kubelet[1199]: E1120 21:34:05.778836    1199 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87f4e339-7528-4360-8d05-9fd3b00685c5-config-volume podName:87f4e339-7528-4360-8d05-9fd3b00685c5 nodeName:}" failed. No retries permitted until 2025-11-20 21:34:06.778818342 +0000 UTC m=+6.742159255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/87f4e339-7528-4360-8d05-9fd3b00685c5-config-volume") pod "coredns-668d6bf9bc-psng5" (UID: "87f4e339-7528-4360-8d05-9fd3b00685c5") : object "kube-system"/"coredns" not registered
	Nov 20 21:34:06 test-preload-787681 kubelet[1199]: E1120 21:34:06.279256    1199 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 20 21:34:06 test-preload-787681 kubelet[1199]: E1120 21:34:06.279368    1199 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/79dbc5f8-63fb-4006-892f-56349d8b3920-kube-proxy podName:79dbc5f8-63fb-4006-892f-56349d8b3920 nodeName:}" failed. No retries permitted until 2025-11-20 21:34:06.779351006 +0000 UTC m=+6.742691923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/79dbc5f8-63fb-4006-892f-56349d8b3920-kube-proxy") pod "kube-proxy-ks59b" (UID: "79dbc5f8-63fb-4006-892f-56349d8b3920") : failed to sync configmap cache: timed out waiting for the condition
	Nov 20 21:34:06 test-preload-787681 kubelet[1199]: E1120 21:34:06.785252    1199 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 20 21:34:06 test-preload-787681 kubelet[1199]: E1120 21:34:06.785334    1199 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87f4e339-7528-4360-8d05-9fd3b00685c5-config-volume podName:87f4e339-7528-4360-8d05-9fd3b00685c5 nodeName:}" failed. No retries permitted until 2025-11-20 21:34:08.785313281 +0000 UTC m=+8.748654208 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/87f4e339-7528-4360-8d05-9fd3b00685c5-config-volume") pod "coredns-668d6bf9bc-psng5" (UID: "87f4e339-7528-4360-8d05-9fd3b00685c5") : object "kube-system"/"coredns" not registered
	Nov 20 21:34:07 test-preload-787681 kubelet[1199]: E1120 21:34:07.220259    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-psng5" podUID="87f4e339-7528-4360-8d05-9fd3b00685c5"
	Nov 20 21:34:08 test-preload-787681 kubelet[1199]: E1120 21:34:08.799792    1199 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 20 21:34:08 test-preload-787681 kubelet[1199]: E1120 21:34:08.799918    1199 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87f4e339-7528-4360-8d05-9fd3b00685c5-config-volume podName:87f4e339-7528-4360-8d05-9fd3b00685c5 nodeName:}" failed. No retries permitted until 2025-11-20 21:34:12.799886559 +0000 UTC m=+12.763227478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/87f4e339-7528-4360-8d05-9fd3b00685c5-config-volume") pod "coredns-668d6bf9bc-psng5" (UID: "87f4e339-7528-4360-8d05-9fd3b00685c5") : object "kube-system"/"coredns" not registered
	Nov 20 21:34:09 test-preload-787681 kubelet[1199]: E1120 21:34:09.219696    1199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-psng5" podUID="87f4e339-7528-4360-8d05-9fd3b00685c5"
	Nov 20 21:34:10 test-preload-787681 kubelet[1199]: E1120 21:34:10.230037    1199 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763674450228792156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 20 21:34:10 test-preload-787681 kubelet[1199]: E1120 21:34:10.230399    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763674450228792156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 20 21:34:20 test-preload-787681 kubelet[1199]: E1120 21:34:20.231741    1199 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763674460231383921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 20 21:34:20 test-preload-787681 kubelet[1199]: E1120 21:34:20.231771    1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763674460231383921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
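The kubelet failures above follow one pattern: the CoreDNS configmap volume cannot mount because the kubelet's configmap cache has not synced yet ("object \"kube-system\"/\"coredns\" not registered"), and each MountVolume.SetUp retry is rescheduled with a doubling backoff (durationBeforeRetry 1s, then 2s, then 4s). A minimal sketch of that backoff shape, with the loop bound hypothetical:
	delay=1
	for attempt in 1 2 3; do
	    echo "MountVolume.SetUp attempt ${attempt} failed; retry in ${delay}s"
	    sleep "${delay}"
	    delay=$((delay * 2))
	done
The NetworkReady=false lines are the same boot-ordering issue: no CNI config exists in /etc/cni/net.d yet, so the CoreDNS sandbox cannot be created until the network plugin is configured.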
	
	
	==> storage-provisioner [62cb606aff4267275bc837adb75641bf185bc4b1872d94d4a631743afadc91e6] <==
	I1120 21:34:06.520613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-787681 -n test-preload-787681
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-787681 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-787681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-787681
--- FAIL: TestPreload (132.41s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (45.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-763370 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-763370 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.354385884s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-763370] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-763370" primary control-plane node in "pause-763370" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-763370" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:43:10.977962   47230 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:43:10.978357   47230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:43:10.978374   47230 out.go:374] Setting ErrFile to fd 2...
	I1120 21:43:10.978382   47230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:43:10.978732   47230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:43:10.979356   47230 out.go:368] Setting JSON to false
	I1120 21:43:10.980490   47230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5141,"bootTime":1763669850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:43:10.980560   47230 start.go:143] virtualization: kvm guest
	I1120 21:43:10.982789   47230 out.go:179] * [pause-763370] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:43:10.984237   47230 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:43:10.984253   47230 notify.go:221] Checking for updates...
	I1120 21:43:10.987663   47230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:43:10.989676   47230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:43:10.990960   47230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 21:43:10.992483   47230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:43:10.993701   47230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:43:10.995499   47230 config.go:182] Loaded profile config "pause-763370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:10.995966   47230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:43:11.038904   47230 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 21:43:11.042306   47230 start.go:309] selected driver: kvm2
	I1120 21:43:11.042331   47230 start.go:930] validating driver "kvm2" against &{Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:11.042534   47230 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:43:11.044027   47230 cni.go:84] Creating CNI manager for ""
	I1120 21:43:11.044103   47230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:43:11.044166   47230 start.go:353] cluster config:
	{Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:11.044376   47230 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:43:11.046877   47230 out.go:179] * Starting "pause-763370" primary control-plane node in "pause-763370" cluster
	I1120 21:43:11.048266   47230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:43:11.048300   47230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:43:11.048308   47230 cache.go:65] Caching tarball of preloaded images
	I1120 21:43:11.048403   47230 preload.go:238] Found /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:43:11.048420   47230 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:43:11.048598   47230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/config.json ...
	I1120 21:43:11.048832   47230 start.go:360] acquireMachinesLock for pause-763370: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 21:43:12.796933   47230 start.go:364] duration metric: took 1.748022714s to acquireMachinesLock for "pause-763370"
	I1120 21:43:12.797011   47230 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:43:12.797027   47230 fix.go:54] fixHost starting: 
	I1120 21:43:12.799576   47230 fix.go:112] recreateIfNeeded on pause-763370: state=Running err=<nil>
	W1120 21:43:12.799612   47230 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:43:12.801238   47230 out.go:252] * Updating the running kvm2 "pause-763370" VM ...
	I1120 21:43:12.801277   47230 machine.go:94] provisionDockerMachine start ...
	I1120 21:43:12.805648   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.806282   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:12.806322   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.806537   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.806831   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:12.806866   47230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:43:12.937431   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-763370
	
	I1120 21:43:12.937483   47230 buildroot.go:166] provisioning hostname "pause-763370"
	I1120 21:43:12.941914   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.942439   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:12.942475   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.942768   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.943104   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:12.943124   47230 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-763370 && echo "pause-763370" | sudo tee /etc/hostname
	I1120 21:43:13.087326   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-763370
	
	I1120 21:43:13.090606   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.091218   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.091270   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.091526   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:13.091814   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:13.091839   47230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-763370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-763370/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-763370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:43:13.219050   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
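The hostname script above is idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends one when none exists, so the node name always resolves locally. A quick hypothetical check on the VM:
	grep '^127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 pause-763370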
	I1120 21:43:13.219095   47230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 21:43:13.219159   47230 buildroot.go:174] setting up certificates
	I1120 21:43:13.219171   47230 provision.go:84] configureAuth start
	I1120 21:43:13.223070   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.223707   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.223744   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226312   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226704   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.226742   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226930   47230 provision.go:143] copyHostCerts
	I1120 21:43:13.226985   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem, removing ...
	I1120 21:43:13.226998   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem
	I1120 21:43:13.227062   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 21:43:13.227170   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem, removing ...
	I1120 21:43:13.227186   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem
	I1120 21:43:13.227210   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 21:43:13.227267   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem, removing ...
	I1120 21:43:13.227274   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem
	I1120 21:43:13.227293   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 21:43:13.227341   47230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.pause-763370 san=[127.0.0.1 192.168.50.92 localhost minikube pause-763370]
	I1120 21:43:13.394135   47230 provision.go:177] copyRemoteCerts
	I1120 21:43:13.394198   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:43:13.397579   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.398078   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.398103   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.398270   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:13.496052   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:43:13.537847   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1120 21:43:13.591402   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:43:13.631078   47230 provision.go:87] duration metric: took 411.891808ms to configureAuth
	I1120 21:43:13.631111   47230 buildroot.go:189] setting minikube options for container-runtime
	I1120 21:43:13.631393   47230 config.go:182] Loaded profile config "pause-763370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:13.634843   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.635404   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.635444   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.635679   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:13.636000   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:13.636028   47230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:43:19.366295   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:43:19.366321   47230 machine.go:97] duration metric: took 6.565033306s to provisionDockerMachine
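Most of the 6.5s recorded for provisionDockerMachine is the `sudo systemctl restart crio` at the end of the sysconfig write (21:43:13.6 to 21:43:19.3 in the timestamps above). A hypothetical way to inspect the drop-in it leaves behind:
	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '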
	I1120 21:43:19.366334   47230 start.go:293] postStartSetup for "pause-763370" (driver="kvm2")
	I1120 21:43:19.366346   47230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:43:19.366430   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:43:19.370029   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.370516   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.370543   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.370714   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.467003   47230 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:43:19.473573   47230 info.go:137] Remote host: Buildroot 2025.02
	I1120 21:43:19.473609   47230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 21:43:19.473701   47230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 21:43:19.473831   47230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem -> 77062.pem in /etc/ssl/certs
	I1120 21:43:19.474040   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:43:19.494153   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:19.535592   47230 start.go:296] duration metric: took 169.240571ms for postStartSetup
	I1120 21:43:19.535640   47230 fix.go:56] duration metric: took 6.738612108s for fixHost
	I1120 21:43:19.539008   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.539485   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.539520   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.539742   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:19.540068   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:19.540082   47230 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 21:43:19.661922   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763674999.654051230
	
	I1120 21:43:19.661948   47230 fix.go:216] guest clock: 1763674999.654051230
	I1120 21:43:19.661972   47230 fix.go:229] Guest: 2025-11-20 21:43:19.65405123 +0000 UTC Remote: 2025-11-20 21:43:19.535646072 +0000 UTC m=+8.619190318 (delta=118.405158ms)
	I1120 21:43:19.661993   47230 fix.go:200] guest clock delta is within tolerance: 118.405158ms
	I1120 21:43:19.661999   47230 start.go:83] releasing machines lock for "pause-763370", held for 6.86502006s
	I1120 21:43:19.665305   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.665827   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.665871   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.666470   47230 ssh_runner.go:195] Run: cat /version.json
	I1120 21:43:19.666517   47230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:43:19.670623   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.670663   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671158   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.671199   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671213   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.671246   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671589   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.671750   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.757220   47230 ssh_runner.go:195] Run: systemctl --version
	I1120 21:43:19.791329   47230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:43:19.958183   47230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:43:19.972171   47230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:43:19.972256   47230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:43:19.986821   47230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
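The find invocation above is logged after shell parsing, so its globs and parentheses appear unquoted; as typed it would be roughly:
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
Here nothing matched, so no bridge config was renamed out of the way.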
	I1120 21:43:19.986878   47230 start.go:496] detecting cgroup driver to use...
	I1120 21:43:19.986960   47230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:43:20.020155   47230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:43:20.042276   47230 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:43:20.042351   47230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:43:20.075095   47230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:43:20.096418   47230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:43:20.313659   47230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:43:20.553252   47230 docker.go:234] disabling docker service ...
	I1120 21:43:20.553344   47230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:43:20.586764   47230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:43:20.604836   47230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:43:20.829605   47230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:43:21.028720   47230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:43:21.047746   47230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:43:21.075961   47230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:43:21.076021   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.091397   47230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:43:21.091494   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.105351   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.120611   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.139624   47230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:43:21.157792   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.172905   47230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.186929   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.202707   47230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:43:21.217837   47230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
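Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this shape (reconstructed from the commands, not a captured file; the TOML section headers are assumed):
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]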
	I1120 21:43:21.232547   47230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:21.437520   47230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:43:22.024669   47230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:43:22.024747   47230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:43:22.032424   47230 start.go:564] Will wait 60s for crictl version
	I1120 21:43:22.032500   47230 ssh_runner.go:195] Run: which crictl
	I1120 21:43:22.037409   47230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 21:43:22.077081   47230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 21:43:22.077174   47230 ssh_runner.go:195] Run: crio --version
	I1120 21:43:22.112251   47230 ssh_runner.go:195] Run: crio --version
	I1120 21:43:22.147198   47230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1120 21:43:22.151588   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:22.152255   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:22.152291   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:22.152619   47230 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1120 21:43:22.157982   47230 kubeadm.go:884] updating cluster {Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:43:22.158171   47230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:43:22.158223   47230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:22.211591   47230 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:43:22.211614   47230 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:43:22.211680   47230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:22.247690   47230 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:43:22.247712   47230 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:43:22.247719   47230 kubeadm.go:935] updating node { 192.168.50.92 8443 v1.34.1 crio true true} ...
	I1120 21:43:22.247814   47230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-763370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
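The drop-in above replaces the kubelet ExecStart with the version-pinned binary and the node identity flags; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below. One hypothetical way to confirm the effective unit on the VM:
	systemctl cat kubelet | grep -A 2 'ExecStart='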
	I1120 21:43:22.247893   47230 ssh_runner.go:195] Run: crio config
	I1120 21:43:22.302915   47230 cni.go:84] Creating CNI manager for ""
	I1120 21:43:22.302938   47230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:43:22.302952   47230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:43:22.302972   47230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.92 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-763370 NodeName:pause-763370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:43:22.303099   47230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-763370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:43:22.303169   47230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:43:22.318421   47230 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:43:22.318491   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:43:22.332429   47230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1120 21:43:22.355454   47230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:43:22.381174   47230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
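The config is staged as kubeadm.yaml.new; on a restart minikube compares it against the kubeadm.yaml already on the VM and only reports "The running cluster does not require reconfiguration" when nothing relevant changed (that comparison is inferred here, not visible in this log). A hypothetical manual check on the VM:
	sudo diff /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new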
	I1120 21:43:22.404131   47230 ssh_runner.go:195] Run: grep 192.168.50.92	control-plane.minikube.internal$ /etc/hosts
	I1120 21:43:22.409397   47230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:22.580909   47230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:43:22.602545   47230 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370 for IP: 192.168.50.92
	I1120 21:43:22.602570   47230 certs.go:195] generating shared ca certs ...
	I1120 21:43:22.602590   47230 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:22.602754   47230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 21:43:22.602793   47230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 21:43:22.602800   47230 certs.go:257] generating profile certs ...
	I1120 21:43:22.602905   47230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/client.key
	I1120 21:43:22.602969   47230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.key.82ea8a75
	I1120 21:43:22.603023   47230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.key
	I1120 21:43:22.603136   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem (1338 bytes)
	W1120 21:43:22.603166   47230 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706_empty.pem, impossibly tiny 0 bytes
	I1120 21:43:22.603175   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 21:43:22.603211   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:43:22.603234   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:43:22.603265   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 21:43:22.603302   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:22.603944   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:43:22.639643   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:43:22.678981   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:43:22.716825   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:43:22.830036   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 21:43:22.933831   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:43:23.049586   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:43:23.112614   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:43:23.205732   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /usr/share/ca-certificates/77062.pem (1708 bytes)
	I1120 21:43:23.322710   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:43:23.365437   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem --> /usr/share/ca-certificates/7706.pem (1338 bytes)
	I1120 21:43:23.443712   47230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:43:23.493912   47230 ssh_runner.go:195] Run: openssl version
	I1120 21:43:23.506870   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.531430   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7706.pem /etc/ssl/certs/7706.pem
	I1120 21:43:23.557455   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.571438   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:36 /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.571513   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.587455   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:43:23.611795   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.643085   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77062.pem /etc/ssl/certs/77062.pem
	I1120 21:43:23.672417   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.686036   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:36 /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.686104   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.708741   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:43:23.735870   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.826259   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:43:23.891448   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.907688   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.907794   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.928181   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
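The three ls/openssl/ln sequences above follow the standard OpenSSL CA-lookup convention: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 here) so TLS clients can locate it by hash. A minimal sketch of the same pattern, reusing the minikubeCA.pem path from the log:

	# compute the subject hash, then create the hash-named symlink OpenSSL expects
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	sudo test -L "/etc/ssl/certs/${hash}.0" && echo "link in place"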
	I1120 21:43:23.982840   47230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:43:24.001324   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:43:24.022305   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:43:24.044730   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:43:24.059730   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:43:24.073983   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:43:24.087235   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
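Each -checkend 86400 run above asks OpenSSL whether the certificate remains valid for the next 86400 seconds (24 hours); a non-zero exit would mark the cert for regeneration. The same check can be made by hand, for example against the etcd server cert referenced above:

	# exit 0 means the cert will not expire within 24h
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h"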
	I1120 21:43:24.102241   47230 kubeadm.go:401] StartCluster: {Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:24.102398   47230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:43:24.102462   47230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:43:24.205748   47230 cri.go:89] found id: "a8f530e568c757fdc6cf379f3aff3799f7ac9edc34796d92623ebca90bef7915"
	I1120 21:43:24.205788   47230 cri.go:89] found id: "83cd96810d2c877bdfa126a89328d7a35eb4be3fd8de4b2ed42c13193144713a"
	I1120 21:43:24.205794   47230 cri.go:89] found id: "8701c5fc6a886422420230e3fbea92c7d4aea86245ec3cc485da7f1aaae6a039"
	I1120 21:43:24.205799   47230 cri.go:89] found id: "8c5ac4300dcc187b93dcd172fa7be5d678471e2a1c514481aea543821e1648ed"
	I1120 21:43:24.205803   47230 cri.go:89] found id: "1d0718306f927d8437ba4a6e5d4e7118090ac488ca0a67da151e8d1900b4c8f8"
	I1120 21:43:24.205808   47230 cri.go:89] found id: "a4bab4186846f86bd976fb6b744cc894bcb7ba8a3c2aa0c4280a557962b79508"
	I1120 21:43:24.205812   47230 cri.go:89] found id: "f2f8984f6605cc119fd8d6509f611adccd97b1f8a92d063da3ba9b481c5f625a"
	I1120 21:43:24.205817   47230 cri.go:89] found id: "47912ef37c7f6bfb5e512cb8ba68e8722a5c82d599dac78f2a2efb6798d250e9"
	I1120 21:43:24.205820   47230 cri.go:89] found id: "2cde9ae8cae4f937f3ada12b4822797c5e72d4e0400b23ae5448cefd1047efaf"
	I1120 21:43:24.205828   47230 cri.go:89] found id: "22ef1f3a8c8b7f3776fba696f1e7097f4b1028136e3b96a6f7efae2623a45d66"
	I1120 21:43:24.205832   47230 cri.go:89] found id: ""
	I1120 21:43:24.205940   47230 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
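The ten container IDs listed by cri.go above come from filtering CRI containers on the io.kubernetes.pod.namespace label, with --quiet restricting output to bare IDs. The same query can be reproduced on the node, with runc as a cross-check against low-level runtime state:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json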
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-763370 -n pause-763370
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-763370 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-763370 logs -n 25: (2.279433529s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-507207 sudo containerd config dump                                                                                                                │ cilium-507207             │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │                     │
	│ ssh     │ -p cilium-507207 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-507207             │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │                     │
	│ ssh     │ -p cilium-507207 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-507207             │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │                     │
	│ ssh     │ -p cilium-507207 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-507207             │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │                     │
	│ ssh     │ -p cilium-507207 sudo crio config                                                                                                                           │ cilium-507207             │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │                     │
	│ delete  │ -p cilium-507207                                                                                                                                            │ cilium-507207             │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:40 UTC │
	│ start   │ -p force-systemd-flag-463882 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                   │ force-systemd-flag-463882 │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-021825 │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │                     │
	│ start   │ -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-021825 │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:41 UTC │
	│ stop    │ stopped-upgrade-744498 stop                                                                                                                                 │ stopped-upgrade-744498    │ jenkins │ v1.32.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:40 UTC │
	│ start   │ -p stopped-upgrade-744498 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-744498    │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:41 UTC │
	│ delete  │ -p cert-expiration-925075                                                                                                                                   │ cert-expiration-925075    │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p guest-304958 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-304958              │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ ssh     │ force-systemd-flag-463882 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                        │ force-systemd-flag-463882 │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ delete  │ -p force-systemd-flag-463882                                                                                                                                │ force-systemd-flag-463882 │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p pause-763370 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-763370              │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:43 UTC │
	│ delete  │ -p kubernetes-upgrade-021825                                                                                                                                │ kubernetes-upgrade-021825 │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p auto-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:43 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-744498 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-744498    │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │                     │
	│ delete  │ -p stopped-upgrade-744498                                                                                                                                   │ stopped-upgrade-744498    │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p kindnet-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-507207            │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:43 UTC │
	│ start   │ -p calico-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio                        │ calico-507207             │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │                     │
	│ start   │ -p pause-763370 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-763370              │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p auto-507207 pgrep -a kubelet                                                                                                                             │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p kindnet-507207 pgrep -a kubelet                                                                                                                          │ kindnet-507207            │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:43:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:43:10.977962   47230 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:43:10.978357   47230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:43:10.978374   47230 out.go:374] Setting ErrFile to fd 2...
	I1120 21:43:10.978382   47230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:43:10.978732   47230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:43:10.979356   47230 out.go:368] Setting JSON to false
	I1120 21:43:10.980490   47230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5141,"bootTime":1763669850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:43:10.980560   47230 start.go:143] virtualization: kvm guest
	I1120 21:43:10.982789   47230 out.go:179] * [pause-763370] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:43:10.984237   47230 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:43:10.984253   47230 notify.go:221] Checking for updates...
	I1120 21:43:10.987663   47230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:43:10.989676   47230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:43:10.990960   47230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 21:43:10.992483   47230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:43:10.993701   47230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:43:10.995499   47230 config.go:182] Loaded profile config "pause-763370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:10.995966   47230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:43:11.038904   47230 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 21:43:11.042306   47230 start.go:309] selected driver: kvm2
	I1120 21:43:11.042331   47230 start.go:930] validating driver "kvm2" against &{Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:11.042534   47230 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:43:11.044027   47230 cni.go:84] Creating CNI manager for ""
	I1120 21:43:11.044103   47230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:43:11.044166   47230 start.go:353] cluster config:
	{Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:11.044376   47230 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:43:11.046877   47230 out.go:179] * Starting "pause-763370" primary control-plane node in "pause-763370" cluster
	I1120 21:43:06.465653   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:06.466576   46445 main.go:143] libmachine: no network interface addresses found for domain calico-507207 (source=lease)
	I1120 21:43:06.466603   46445 main.go:143] libmachine: trying to list again with source=arp
	I1120 21:43:06.467094   46445 main.go:143] libmachine: unable to find current IP address of domain calico-507207 in network mk-calico-507207 (interfaces detected: [])
	I1120 21:43:06.467137   46445 retry.go:31] will retry after 4.447175288s: waiting for domain to come up
	I1120 21:43:10.919581   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:10.920601   46445 main.go:143] libmachine: domain calico-507207 has current primary IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:10.920628   46445 main.go:143] libmachine: found domain IP: 192.168.83.30
	I1120 21:43:10.920639   46445 main.go:143] libmachine: reserving static IP address...
	I1120 21:43:10.921165   46445 main.go:143] libmachine: unable to find host DHCP lease matching {name: "calico-507207", mac: "52:54:00:8b:6e:d5", ip: "192.168.83.30"} in network mk-calico-507207
	I1120 21:43:11.161120   46445 main.go:143] libmachine: reserved static IP address 192.168.83.30 for domain calico-507207
	I1120 21:43:11.161146   46445 main.go:143] libmachine: waiting for SSH...
	I1120 21:43:11.161154   46445 main.go:143] libmachine: Getting to WaitForSSH function...
	I1120 21:43:11.164055   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.164539   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.164567   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.164804   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:11.165181   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:11.165201   46445 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1120 21:43:11.285470   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:43:11.285919   46445 main.go:143] libmachine: domain creation complete
	I1120 21:43:11.287523   46445 machine.go:94] provisionDockerMachine start ...
	I1120 21:43:11.290066   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.290470   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.290494   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.290668   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:11.290960   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:11.290979   46445 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:43:11.407600   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1120 21:43:11.407632   46445 buildroot.go:166] provisioning hostname "calico-507207"
	I1120 21:43:11.411153   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.411666   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.411705   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.411965   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:11.412323   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:11.412347   46445 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-507207 && echo "calico-507207" | sudo tee /etc/hostname
	I1120 21:43:11.048266   47230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:43:11.048300   47230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:43:11.048308   47230 cache.go:65] Caching tarball of preloaded images
	I1120 21:43:11.048403   47230 preload.go:238] Found /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:43:11.048420   47230 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:43:11.048598   47230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/config.json ...
	I1120 21:43:11.048832   47230 start.go:360] acquireMachinesLock for pause-763370: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 21:43:12.796933   47230 start.go:364] duration metric: took 1.748022714s to acquireMachinesLock for "pause-763370"
	I1120 21:43:12.797011   47230 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:43:12.797027   47230 fix.go:54] fixHost starting: 
	I1120 21:43:12.799576   47230 fix.go:112] recreateIfNeeded on pause-763370: state=Running err=<nil>
	W1120 21:43:12.799612   47230 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:43:13.337678   46379 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:43:13.337777   46379 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:43:13.337907   46379 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:43:13.338081   46379 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:43:13.338215   46379 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:43:13.338321   46379 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:43:13.340137   46379 out.go:252]   - Generating certificates and keys ...
	I1120 21:43:13.340225   46379 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:43:13.340302   46379 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:43:13.340398   46379 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:43:13.340495   46379 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:43:13.340624   46379 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:43:13.340713   46379 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:43:13.340825   46379 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:43:13.341031   46379 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-507207 localhost] and IPs [192.168.72.86 127.0.0.1 ::1]
	I1120 21:43:13.341110   46379 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:43:13.341295   46379 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-507207 localhost] and IPs [192.168.72.86 127.0.0.1 ::1]
	I1120 21:43:13.341382   46379 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:43:13.341465   46379 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:43:13.341525   46379 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:43:13.341604   46379 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:43:13.341671   46379 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:43:13.341752   46379 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:43:13.341846   46379 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:43:13.341962   46379 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:43:13.342043   46379 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:43:13.342168   46379 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:43:13.342267   46379 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:43:13.343825   46379 out.go:252]   - Booting up control plane ...
	I1120 21:43:13.343961   46379 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:43:13.344080   46379 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:43:13.344177   46379 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:43:13.344312   46379 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:43:13.344518   46379 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:43:13.344696   46379 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:43:13.344809   46379 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:43:13.344883   46379 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:43:13.345072   46379 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:43:13.345206   46379 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:43:13.345306   46379 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.009765248s
	I1120 21:43:13.345422   46379 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:43:13.345528   46379 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.86:8443/livez
	I1120 21:43:13.345653   46379 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:43:13.345763   46379 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:43:13.345880   46379 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.567923731s
	I1120 21:43:13.346001   46379 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.53811093s
	I1120 21:43:13.346106   46379 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.505162035s
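The control-plane checks above poll well-known health endpoints: the controller-manager on https://127.0.0.1:10257/healthz, the scheduler on https://127.0.0.1:10259/livez, and the API server on its advertise address. They can be probed manually from the node; -k skips verification of the self-signed serving certs:

	curl -k https://127.0.0.1:10257/healthz
	curl -k https://127.0.0.1:10259/livez
	curl -k https://192.168.72.86:8443/livez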
	I1120 21:43:13.346260   46379 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:43:13.346453   46379 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:43:13.346558   46379 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:43:13.346821   46379 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-507207 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:43:13.346922   46379 kubeadm.go:319] [bootstrap-token] Using token: rr0fph.2tzjc9sivpbl0cbq
	I1120 21:43:13.348599   46379 out.go:252]   - Configuring RBAC rules ...
	I1120 21:43:13.348741   46379 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:43:13.348883   46379 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:43:13.349113   46379 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:43:13.349318   46379 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:43:13.349475   46379 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:43:13.349594   46379 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:43:13.349768   46379 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:43:13.349845   46379 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:43:13.349920   46379 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:43:13.349930   46379 kubeadm.go:319] 
	I1120 21:43:13.350016   46379 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:43:13.350028   46379 kubeadm.go:319] 
	I1120 21:43:13.350147   46379 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:43:13.350165   46379 kubeadm.go:319] 
	I1120 21:43:13.350200   46379 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:43:13.350284   46379 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:43:13.350357   46379 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:43:13.350365   46379 kubeadm.go:319] 
	I1120 21:43:13.350443   46379 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:43:13.350456   46379 kubeadm.go:319] 
	I1120 21:43:13.350522   46379 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:43:13.350532   46379 kubeadm.go:319] 
	I1120 21:43:13.350609   46379 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:43:13.350730   46379 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:43:13.350827   46379 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:43:13.350835   46379 kubeadm.go:319] 
	I1120 21:43:13.350962   46379 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:43:13.351081   46379 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:43:13.351094   46379 kubeadm.go:319] 
	I1120 21:43:13.351181   46379 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rr0fph.2tzjc9sivpbl0cbq \
	I1120 21:43:13.351310   46379 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 \
	I1120 21:43:13.351336   46379 kubeadm.go:319] 	--control-plane 
	I1120 21:43:13.351342   46379 kubeadm.go:319] 
	I1120 21:43:13.351463   46379 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:43:13.351476   46379 kubeadm.go:319] 
	I1120 21:43:13.351584   46379 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rr0fph.2tzjc9sivpbl0cbq \
	I1120 21:43:13.351755   46379 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 
	I1120 21:43:13.351769   46379 cni.go:84] Creating CNI manager for "kindnet"
	I1120 21:43:13.353819   46379 out.go:179] * Configuring CNI (Container Networking Interface) ...
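The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key, which joining nodes use to pin the control plane. It can be recomputed to verify a printed join command, using the standard kubeadm recipe (assuming an RSA CA key); note that minikube keeps the CA under /var/lib/minikube/certs, the certificateDir shown earlier, rather than the kubeadm default /etc/kubernetes/pki:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'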
	I1120 21:43:11.550640   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-507207
	
	I1120 21:43:11.553479   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.553953   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.553984   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.554196   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:11.554403   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:11.554426   46445 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-507207' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-507207/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-507207' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:43:11.682562   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:43:11.682602   46445 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 21:43:11.682646   46445 buildroot.go:174] setting up certificates
	I1120 21:43:11.682656   46445 provision.go:84] configureAuth start
	I1120 21:43:11.686447   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.686970   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.687012   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.689951   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.690358   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.690391   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.690544   46445 provision.go:143] copyHostCerts
	I1120 21:43:11.690626   46445 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem, removing ...
	I1120 21:43:11.690645   46445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem
	I1120 21:43:11.690739   46445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 21:43:11.690951   46445 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem, removing ...
	I1120 21:43:11.690969   46445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem
	I1120 21:43:11.691020   46445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 21:43:11.691112   46445 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem, removing ...
	I1120 21:43:11.691123   46445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem
	I1120 21:43:11.691172   46445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 21:43:11.691256   46445 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.calico-507207 san=[127.0.0.1 192.168.83.30 calico-507207 localhost minikube]
	I1120 21:43:11.991620   46445 provision.go:177] copyRemoteCerts
	I1120 21:43:11.991678   46445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:43:11.994666   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.995145   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.995181   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.995364   46445 sshutil.go:53] new ssh client: &{IP:192.168.83.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa Username:docker}
	I1120 21:43:12.092099   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:43:12.135227   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:43:12.173628   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:43:12.207433   46445 provision.go:87] duration metric: took 524.745233ms to configureAuth
	I1120 21:43:12.207460   46445 buildroot.go:189] setting minikube options for container-runtime
	I1120 21:43:12.207669   46445 config.go:182] Loaded profile config "calico-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:12.210735   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.211256   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.211296   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.211481   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.211692   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:12.211712   46445 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:43:12.506763   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:43:12.506811   46445 machine.go:97] duration metric: took 1.219261033s to provisionDockerMachine
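The printf/tee snippet a few lines up hands CRI-O an --insecure-registry 10.96.0.0/12 option via /etc/sysconfig/crio.minikube, allowing plain-HTTP pulls from registries inside the service CIDR, then restarts the daemon to apply it. Whether the option took effect can be confirmed with the same commands the suite runs elsewhere in this report:

	systemctl cat crio --no-pager
	sudo crio config | grep -i -A2 insecure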
	I1120 21:43:12.506821   46445 client.go:176] duration metric: took 20.942020805s to LocalClient.Create
	I1120 21:43:12.506839   46445 start.go:167] duration metric: took 20.942078344s to libmachine.API.Create "calico-507207"
	I1120 21:43:12.506864   46445 start.go:293] postStartSetup for "calico-507207" (driver="kvm2")
	I1120 21:43:12.506878   46445 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:43:12.506967   46445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:43:12.511349   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.513474   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.513517   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.513732   46445 sshutil.go:53] new ssh client: &{IP:192.168.83.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa Username:docker}
	I1120 21:43:12.606159   46445 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:43:12.615456   46445 info.go:137] Remote host: Buildroot 2025.02
	I1120 21:43:12.615489   46445 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 21:43:12.615583   46445 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 21:43:12.615679   46445 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem -> 77062.pem in /etc/ssl/certs
	I1120 21:43:12.615800   46445 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:43:12.634948   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:12.671513   46445 start.go:296] duration metric: took 164.633307ms for postStartSetup
	I1120 21:43:12.674837   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.675249   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.675273   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.675538   46445 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/config.json ...
	I1120 21:43:12.675740   46445 start.go:128] duration metric: took 21.113021512s to createHost
	I1120 21:43:12.678116   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.678547   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.678570   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.678785   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.679013   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:12.679025   46445 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 21:43:12.796742   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763674992.745795385
	
	I1120 21:43:12.796768   46445 fix.go:216] guest clock: 1763674992.745795385
	I1120 21:43:12.796778   46445 fix.go:229] Guest: 2025-11-20 21:43:12.745795385 +0000 UTC Remote: 2025-11-20 21:43:12.675753604 +0000 UTC m=+81.281458306 (delta=70.041781ms)
	I1120 21:43:12.796800   46445 fix.go:200] guest clock delta is within tolerance: 70.041781ms
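fix.go derives the guest/host clock delta by running date +%s.%N inside the VM over SSH and comparing it with the host-side timestamp captured around the call; the ~70ms delta here is well inside tolerance, so the guest clock is left alone. A rough manual approximation (not how fix.go itself measures it), using the SSH user and IP from this run:

	guest=$(ssh docker@192.168.83.30 'date +%s.%N')
	host=$(date +%s.%N)
	echo "delta: $(echo "$host - $guest" | bc)s"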
	I1120 21:43:12.796807   46445 start.go:83] releasing machines lock for "calico-507207", held for 21.234251171s
	I1120 21:43:12.800917   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.801469   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.801505   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.802341   46445 ssh_runner.go:195] Run: cat /version.json
	I1120 21:43:12.802445   46445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:43:12.806679   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.806879   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.807236   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.807278   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.807325   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.807360   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.807569   46445 sshutil.go:53] new ssh client: &{IP:192.168.83.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa Username:docker}
	I1120 21:43:12.807776   46445 sshutil.go:53] new ssh client: &{IP:192.168.83.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa Username:docker}
	I1120 21:43:12.915023   46445 ssh_runner.go:195] Run: systemctl --version
	I1120 21:43:12.921824   46445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:43:13.100549   46445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:43:13.110151   46445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:43:13.110249   46445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:43:13.134789   46445 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 21:43:13.134812   46445 start.go:496] detecting cgroup driver to use...
	I1120 21:43:13.134904   46445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:43:13.161030   46445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:43:13.180945   46445 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:43:13.181008   46445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:43:13.201220   46445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:43:13.222316   46445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:43:13.426171   46445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:43:13.697654   46445 docker.go:234] disabling docker service ...
	I1120 21:43:13.697741   46445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:43:13.721062   46445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:43:13.738884   46445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:43:13.966724   46445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:43:14.147820   46445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:43:14.167307   46445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:43:14.193197   46445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:43:14.193268   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.208694   46445 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:43:14.208775   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.222402   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.237785   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.253289   46445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:43:14.267414   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.281271   46445 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.304666   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
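
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image and cgroup driver. A rough Go equivalent of one such `key = value` rewrite, for illustration only (file I/O and the conmon_cgroup/default_sysctls edits are omitted):

package main

import (
	"fmt"
	"regexp"
)

// setKey rewrites a `key = value` line in a crio drop-in config, mirroring
// the `sed -i 's|^.*key = .*$|key = "value"|'` invocations above.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
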
	I1120 21:43:14.317792   46445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:43:14.329719   46445 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1120 21:43:14.329790   46445 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1120 21:43:14.360622   46445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
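
Above, `sysctl net.bridge.bridge-nf-call-iptables` fails because br_netfilter is not loaded yet, so minikube loads the module and then enables IPv4 forwarding. A sketch of that fallback in Go (assumed shape; the real code shells out over SSH via ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: when the bridge-nf
// sysctl key is missing, load br_netfilter, then enable IPv4 forwarding.
// Requires root, like the sudo invocations in the log.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
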
	I1120 21:43:14.378779   46445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:14.526554   46445 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:43:14.661930   46445 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:43:14.662028   46445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:43:14.668545   46445 start.go:564] Will wait 60s for crictl version
	I1120 21:43:14.668619   46445 ssh_runner.go:195] Run: which crictl
	I1120 21:43:14.674963   46445 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 21:43:14.721030   46445 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 21:43:14.721115   46445 ssh_runner.go:195] Run: crio --version
	I1120 21:43:14.756922   46445 ssh_runner.go:195] Run: crio --version
	I1120 21:43:14.790743   46445 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
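
The start.go lines above wait up to 60s for the CRI socket and for a working crictl after restarting crio. A minimal polling sketch of that wait (the 500ms interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until a filesystem path exists or the deadline passes,
// like the two "Will wait 60s" loops for crio.sock and crictl above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}
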
	W1120 21:43:11.330930   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	W1120 21:43:13.832418   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	I1120 21:43:13.354946   46379 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:43:13.368140   46379 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:43:13.368164   46379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:43:13.397473   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:43:13.768831   46379 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:43:13.768990   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:13.769022   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-507207 minikube.k8s.io/updated_at=2025_11_20T21_43_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=kindnet-507207 minikube.k8s.io/primary=true
	I1120 21:43:13.843035   46379 ops.go:34] apiserver oom_adj: -16
	I1120 21:43:14.040880   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:14.541731   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:15.041599   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:12.801238   47230 out.go:252] * Updating the running kvm2 "pause-763370" VM ...
	I1120 21:43:12.801277   47230 machine.go:94] provisionDockerMachine start ...
	I1120 21:43:12.805648   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.806282   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:12.806322   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.806537   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.806831   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:12.806866   47230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:43:12.937431   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-763370
	
	I1120 21:43:12.937483   47230 buildroot.go:166] provisioning hostname "pause-763370"
	I1120 21:43:12.941914   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.942439   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:12.942475   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.942768   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.943104   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:12.943124   47230 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-763370 && echo "pause-763370" | sudo tee /etc/hostname
	I1120 21:43:13.087326   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-763370
	
	I1120 21:43:13.090606   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.091218   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.091270   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.091526   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:13.091814   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:13.091839   47230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-763370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-763370/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-763370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:43:13.219050   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:43:13.219095   47230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 21:43:13.219159   47230 buildroot.go:174] setting up certificates
	I1120 21:43:13.219171   47230 provision.go:84] configureAuth start
	I1120 21:43:13.223070   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.223707   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.223744   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226312   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226704   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.226742   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226930   47230 provision.go:143] copyHostCerts
	I1120 21:43:13.226985   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem, removing ...
	I1120 21:43:13.226998   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem
	I1120 21:43:13.227062   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 21:43:13.227170   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem, removing ...
	I1120 21:43:13.227186   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem
	I1120 21:43:13.227210   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 21:43:13.227267   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem, removing ...
	I1120 21:43:13.227274   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem
	I1120 21:43:13.227293   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 21:43:13.227341   47230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.pause-763370 san=[127.0.0.1 192.168.50.92 localhost minikube pause-763370]
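
provision.go above generates a server certificate whose SANs cover the loopback address, the VM IP, and the machine's host names. A self-signed Go sketch producing a cert with the same SAN set (the real cert is signed by the minikube CA via ca-key.pem, not self-signed, so treat this purely as a shape sketch):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-763370"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the profile
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.92")},
		DNSNames:     []string{"localhost", "minikube", "pause-763370"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
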
	I1120 21:43:13.394135   47230 provision.go:177] copyRemoteCerts
	I1120 21:43:13.394198   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:43:13.397579   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.398078   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.398103   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.398270   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:13.496052   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:43:13.537847   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1120 21:43:13.591402   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:43:13.631078   47230 provision.go:87] duration metric: took 411.891808ms to configureAuth
	I1120 21:43:13.631111   47230 buildroot.go:189] setting minikube options for container-runtime
	I1120 21:43:13.631393   47230 config.go:182] Loaded profile config "pause-763370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:13.634843   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.635404   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.635444   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.635679   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:13.636000   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:13.636028   47230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:43:14.794917   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:14.795363   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:14.795387   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:14.795610   46445 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1120 21:43:14.800716   46445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:43:14.817405   46445 kubeadm.go:884] updating cluster {Name:calico-507207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-507207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.83.30 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:43:14.817510   46445 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:43:14.817556   46445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:14.852130   46445 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 21:43:14.852225   46445 ssh_runner.go:195] Run: which lz4
	I1120 21:43:14.857292   46445 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1120 21:43:14.862790   46445 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1120 21:43:14.862822   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1120 21:43:15.542009   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:16.041532   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:16.542033   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:17.041600   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:17.541753   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:17.696230   46379 kubeadm.go:1114] duration metric: took 3.927330153s to wait for elevateKubeSystemPrivileges
	I1120 21:43:17.696282   46379 kubeadm.go:403] duration metric: took 17.531680113s to StartCluster
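
The repeated `kubectl get sa default` runs above are a readiness poll: kubeadm has finished, and minikube waits for the default service account to appear before granting kube-system privileges. A sketch of that loop (the timeout value is assumed; the ~500ms interval matches the timestamps in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until the service
// account exists, matching the polling visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute))
}
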
	I1120 21:43:17.696304   46379 settings.go:142] acquiring lock: {Name:mke92973c8f33ef32fe11f7b266adf74cd3ec47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:17.696389   46379 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:43:17.698050   46379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:17.698314   46379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:43:17.698318   46379 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.86 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:43:17.698399   46379 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:43:17.698492   46379 addons.go:70] Setting storage-provisioner=true in profile "kindnet-507207"
	I1120 21:43:17.698504   46379 config.go:182] Loaded profile config "kindnet-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:17.698518   46379 addons.go:239] Setting addon storage-provisioner=true in "kindnet-507207"
	I1120 21:43:17.698535   46379 addons.go:70] Setting default-storageclass=true in profile "kindnet-507207"
	I1120 21:43:17.698548   46379 host.go:66] Checking if "kindnet-507207" exists ...
	I1120 21:43:17.698568   46379 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-507207"
	I1120 21:43:17.700338   46379 out.go:179] * Verifying Kubernetes components...
	I1120 21:43:17.702534   46379 addons.go:239] Setting addon default-storageclass=true in "kindnet-507207"
	I1120 21:43:17.702585   46379 host.go:66] Checking if "kindnet-507207" exists ...
	I1120 21:43:17.704505   46379 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:43:17.704526   46379 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:43:17.705025   46379 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:43:17.705028   46379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:17.706379   46379 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:43:17.706398   46379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:43:17.708032   46379 main.go:143] libmachine: domain kindnet-507207 has defined MAC address 52:54:00:7f:03:9a in network mk-kindnet-507207
	I1120 21:43:17.708623   46379 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:03:9a", ip: ""} in network mk-kindnet-507207: {Iface:virbr4 ExpiryTime:2025-11-20 22:42:48 +0000 UTC Type:0 Mac:52:54:00:7f:03:9a Iaid: IPaddr:192.168.72.86 Prefix:24 Hostname:kindnet-507207 Clientid:01:52:54:00:7f:03:9a}
	I1120 21:43:17.708654   46379 main.go:143] libmachine: domain kindnet-507207 has defined IP address 192.168.72.86 and MAC address 52:54:00:7f:03:9a in network mk-kindnet-507207
	I1120 21:43:17.708886   46379 sshutil.go:53] new ssh client: &{IP:192.168.72.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/kindnet-507207/id_rsa Username:docker}
	I1120 21:43:17.709652   46379 main.go:143] libmachine: domain kindnet-507207 has defined MAC address 52:54:00:7f:03:9a in network mk-kindnet-507207
	I1120 21:43:17.710201   46379 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:03:9a", ip: ""} in network mk-kindnet-507207: {Iface:virbr4 ExpiryTime:2025-11-20 22:42:48 +0000 UTC Type:0 Mac:52:54:00:7f:03:9a Iaid: IPaddr:192.168.72.86 Prefix:24 Hostname:kindnet-507207 Clientid:01:52:54:00:7f:03:9a}
	I1120 21:43:17.710235   46379 main.go:143] libmachine: domain kindnet-507207 has defined IP address 192.168.72.86 and MAC address 52:54:00:7f:03:9a in network mk-kindnet-507207
	I1120 21:43:17.710435   46379 sshutil.go:53] new ssh client: &{IP:192.168.72.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/kindnet-507207/id_rsa Username:docker}
	I1120 21:43:18.009369   46379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:43:18.128996   46379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:43:18.280102   46379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:43:18.468040   46379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:43:18.941708   46379 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
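
The long bash pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host bridge IP, then replaces it with kubectl. A Go sketch of the string edit it performs on the Corefile (indentation widths are assumed to match the stock Corefile):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block ahead of the forward plugin in a
// Corefile, the same edit the sed pipeline above applies before
// `kubectl replace`.
func injectHostRecord(corefile, hostIP string) string {
	const anchor = "        forward . /etc/resolv.conf"
	idx := strings.Index(corefile, anchor)
	if idx < 0 {
		return corefile
	}
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	return corefile[:idx] + block + corefile[idx:]
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.72.1"))
}
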
	I1120 21:43:18.942969   46379 node_ready.go:35] waiting up to 15m0s for node "kindnet-507207" to be "Ready" ...
	I1120 21:43:19.450444   46379 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-507207" context rescaled to 1 replicas
	I1120 21:43:19.593315   46379 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125229418s)
	I1120 21:43:19.594793   46379 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1120 21:43:16.329884   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	W1120 21:43:18.333605   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	I1120 21:43:19.595927   46379 addons.go:515] duration metric: took 1.897522319s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 21:43:16.726066   46445 crio.go:462] duration metric: took 1.868818577s to copy over tarball
	I1120 21:43:16.726163   46445 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1120 21:43:18.628353   46445 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.902149532s)
	I1120 21:43:18.628395   46445 crio.go:469] duration metric: took 1.902293503s to extract the tarball
	I1120 21:43:18.628406   46445 ssh_runner.go:146] rm: /preloaded.tar.lz4
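
Since no preload tarball existed on the VM, the ~409MB archive was copied over and unpacked with the tar invocation above. A thin Go wrapper around the same command, for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the tar invocation above: lz4-decompress the image
// preload into /var, preserving security xattrs so file capabilities survive.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4"))
}
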
	I1120 21:43:18.682597   46445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:18.739529   46445 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:43:18.739560   46445 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:43:18.739572   46445 kubeadm.go:935] updating node { 192.168.83.30 8443 v1.34.1 crio true true} ...
	I1120 21:43:18.739687   46445 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-507207 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-507207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1120 21:43:18.739775   46445 ssh_runner.go:195] Run: crio config
	I1120 21:43:18.797735   46445 cni.go:84] Creating CNI manager for "calico"
	I1120 21:43:18.797780   46445 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:43:18.797810   46445 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.30 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-507207 NodeName:calico-507207 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:43:18.798018   46445 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-507207"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:43:18.798106   46445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:43:18.812415   46445 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:43:18.812506   46445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:43:18.826043   46445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1120 21:43:18.858191   46445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:43:18.884370   46445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1120 21:43:18.914663   46445 ssh_runner.go:195] Run: grep 192.168.83.30	control-plane.minikube.internal$ /etc/hosts
	I1120 21:43:18.921359   46445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:43:18.943614   46445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:19.158594   46445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:43:19.201422   46445 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207 for IP: 192.168.83.30
	I1120 21:43:19.201453   46445 certs.go:195] generating shared ca certs ...
	I1120 21:43:19.201491   46445 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.201668   46445 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 21:43:19.201730   46445 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 21:43:19.201741   46445 certs.go:257] generating profile certs ...
	I1120 21:43:19.201829   46445 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.key
	I1120 21:43:19.201879   46445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt with IP's: []
	I1120 21:43:19.290057   46445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt ...
	I1120 21:43:19.290090   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: {Name:mk8f923f848c03ed741c45e7ba45e75e4c375b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.290300   46445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.key ...
	I1120 21:43:19.290316   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.key: {Name:mk6275d0f8481b4dae6a07659889c325cb1e0d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.290444   46445 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key.9e7bb764
	I1120 21:43:19.290472   46445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt.9e7bb764 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.30]
	I1120 21:43:19.490899   46445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt.9e7bb764 ...
	I1120 21:43:19.490932   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt.9e7bb764: {Name:mk1036f22022b13f534f27f2e23460d522660a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.491137   46445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key.9e7bb764 ...
	I1120 21:43:19.491156   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key.9e7bb764: {Name:mk187c6c1f2ecdf12600aed5e9c8ec401ed7e45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.491268   46445 certs.go:382] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt.9e7bb764 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt
	I1120 21:43:19.491372   46445 certs.go:386] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key.9e7bb764 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key
	I1120 21:43:19.491454   46445 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.key
	I1120 21:43:19.491473   46445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.crt with IP's: []
	I1120 21:43:19.721629   46445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.crt ...
	I1120 21:43:19.721665   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.crt: {Name:mk88d9c0eb4cdcce476b3c18d0d6d3c109e71e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.721897   46445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.key ...
	I1120 21:43:19.721924   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.key: {Name:mkeec0206f16d1ef05d4ed4ea26f2c23aa44a015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.722143   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem (1338 bytes)
	W1120 21:43:19.722182   46445 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706_empty.pem, impossibly tiny 0 bytes
	I1120 21:43:19.722191   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 21:43:19.722213   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:43:19.722234   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:43:19.722255   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 21:43:19.722292   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:19.722827   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:43:19.759961   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:43:19.800903   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:43:19.841034   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:43:19.879179   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 21:43:19.933934   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:43:19.977243   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:43:20.013923   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:43:20.050520   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:43:20.092091   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem --> /usr/share/ca-certificates/7706.pem (1338 bytes)
	I1120 21:43:20.129917   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /usr/share/ca-certificates/77062.pem (1708 bytes)
	I1120 21:43:20.171308   46445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:43:20.200414   46445 ssh_runner.go:195] Run: openssl version
	I1120 21:43:20.208330   46445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77062.pem
	I1120 21:43:20.221348   46445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77062.pem /etc/ssl/certs/77062.pem
	I1120 21:43:20.236649   46445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77062.pem
	I1120 21:43:20.243285   46445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:36 /usr/share/ca-certificates/77062.pem
	I1120 21:43:20.243360   46445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77062.pem
	I1120 21:43:20.251893   46445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:43:20.267188   46445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/77062.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:43:20.282030   46445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:20.297483   46445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:43:20.314463   46445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:20.323449   46445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:20.323537   46445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:20.334177   46445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:43:20.352927   46445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:43:20.372269   46445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7706.pem
	I1120 21:43:20.389645   46445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7706.pem /etc/ssl/certs/7706.pem
	I1120 21:43:20.405875   46445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7706.pem
	I1120 21:43:20.413878   46445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:36 /usr/share/ca-certificates/7706.pem
	I1120 21:43:20.413962   46445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7706.pem
	I1120 21:43:20.423718   46445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:43:20.437863   46445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7706.pem /etc/ssl/certs/51391683.0
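
The openssl/ln sequence above installs each certificate into the system trust store: `openssl x509 -hash -noout` yields the subject-hash link name (e.g. b5213941.0) that OpenSSL's directory lookup expects under /etc/ssl/certs. A sketch of the same two steps from Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert replicates the flow above: ask openssl for the subject hash, then
// symlink the PEM under /etc/ssl/certs/<hash>.0 so TLS verification finds it.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
}
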
	I1120 21:43:20.452939   46445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:43:20.459111   46445 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:43:20.459173   46445 kubeadm.go:401] StartCluster: {Name:calico-507207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-507207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.83.30 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:20.459256   46445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:43:20.459314   46445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:43:20.510188   46445 cri.go:89] found id: ""
	I1120 21:43:20.510282   46445 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:43:20.528031   46445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:43:20.543218   46445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:43:20.557314   46445 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:43:20.557336   46445 kubeadm.go:158] found existing configuration files:
	
	I1120 21:43:20.557393   46445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:43:20.573323   46445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:43:20.573396   46445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:43:20.586687   46445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:43:20.607619   46445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:43:20.607693   46445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:43:20.622615   46445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:43:20.636571   46445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:43:20.636645   46445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:43:20.655844   46445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:43:20.671216   46445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:43:20.671292   46445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
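
Before `kubeadm init`, the grep/rm pairs above sweep any stale kubeconfig-style files that do not reference the expected control-plane endpoint (here all four are simply absent, this being a first start). The same loop, sketched in Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleConf removes a kubeconfig-style file unless it already points at
// the expected control-plane endpoint, as the grep/rm pairs above do.
func cleanStaleConf(path, endpoint string) error {
	if exec.Command("sudo", "grep", endpoint, path).Run() == nil {
		return nil // endpoint present; keep the file
	}
	return exec.Command("sudo", "rm", "-f", path).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := cleanStaleConf("/etc/kubernetes/"+f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
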
	I1120 21:43:20.688117   46445 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1120 21:43:20.753280   46445 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:43:20.753383   46445 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:43:20.888634   46445 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:43:20.888794   46445 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:43:20.888973   46445 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:43:20.911020   46445 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:43:19.366295   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:43:19.366321   47230 machine.go:97] duration metric: took 6.565033306s to provisionDockerMachine
	I1120 21:43:19.366334   47230 start.go:293] postStartSetup for "pause-763370" (driver="kvm2")
	I1120 21:43:19.366346   47230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:43:19.366430   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:43:19.370029   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.370516   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.370543   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.370714   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.467003   47230 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:43:19.473573   47230 info.go:137] Remote host: Buildroot 2025.02
	I1120 21:43:19.473609   47230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 21:43:19.473701   47230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 21:43:19.473831   47230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem -> 77062.pem in /etc/ssl/certs
	I1120 21:43:19.474040   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:43:19.494153   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:19.535592   47230 start.go:296] duration metric: took 169.240571ms for postStartSetup
	I1120 21:43:19.535640   47230 fix.go:56] duration metric: took 6.738612108s for fixHost
	I1120 21:43:19.539008   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.539485   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.539520   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.539742   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:19.540068   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:19.540082   47230 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 21:43:19.661922   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763674999.654051230
	
	I1120 21:43:19.661948   47230 fix.go:216] guest clock: 1763674999.654051230
	I1120 21:43:19.661972   47230 fix.go:229] Guest: 2025-11-20 21:43:19.65405123 +0000 UTC Remote: 2025-11-20 21:43:19.535646072 +0000 UTC m=+8.619190318 (delta=118.405158ms)
	I1120 21:43:19.661993   47230 fix.go:200] guest clock delta is within tolerance: 118.405158ms
	I1120 21:43:19.661999   47230 start.go:83] releasing machines lock for "pause-763370", held for 6.86502006s
	I1120 21:43:19.665305   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.665827   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.665871   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.666470   47230 ssh_runner.go:195] Run: cat /version.json
	I1120 21:43:19.666517   47230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:43:19.670623   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.670663   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671158   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.671199   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671213   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.671246   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671589   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.671750   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.757220   47230 ssh_runner.go:195] Run: systemctl --version
	I1120 21:43:19.791329   47230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:43:19.958183   47230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:43:19.972171   47230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:43:19.972256   47230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:43:19.986821   47230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:43:19.986878   47230 start.go:496] detecting cgroup driver to use...
	I1120 21:43:19.986960   47230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:43:20.020155   47230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:43:20.042276   47230 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:43:20.042351   47230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:43:20.075095   47230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:43:20.096418   47230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:43:20.313659   47230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:43:20.553252   47230 docker.go:234] disabling docker service ...
	I1120 21:43:20.553344   47230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:43:20.586764   47230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:43:20.604836   47230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:43:20.829605   47230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:43:20.944037   46445 out.go:252]   - Generating certificates and keys ...
	I1120 21:43:20.944143   46445 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:43:20.944269   46445 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:43:21.449215   46445 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:43:21.028720   47230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:43:21.047746   47230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:43:21.075961   47230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:43:21.076021   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.091397   47230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:43:21.091494   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.105351   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.120611   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.139624   47230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:43:21.157792   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.172905   47230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.186929   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.202707   47230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:43:21.217837   47230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:43:21.232547   47230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:21.437520   47230 ssh_runner.go:195] Run: sudo systemctl restart crio
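The run above reconfigures CRI-O entirely through in-place sed rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) before reloading systemd and restarting the service. A minimal Go sketch of the same cgroup-driver rewrite, shown only as an illustration of the technique and assuming the file path from the log plus write access:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		// Path taken from the log above.
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}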
	I1120 21:43:22.024669   47230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:43:22.024747   47230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:43:22.032424   47230 start.go:564] Will wait 60s for crictl version
	I1120 21:43:22.032500   47230 ssh_runner.go:195] Run: which crictl
	I1120 21:43:22.037409   47230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 21:43:22.077081   47230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 21:43:22.077174   47230 ssh_runner.go:195] Run: crio --version
	I1120 21:43:22.112251   47230 ssh_runner.go:195] Run: crio --version
	I1120 21:43:22.147198   47230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1120 21:43:21.575325   46445 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:43:21.724903   46445 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:43:21.788174   46445 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:43:22.007441   46445 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:43:22.007627   46445 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-507207 localhost] and IPs [192.168.83.30 127.0.0.1 ::1]
	I1120 21:43:22.231958   46445 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:43:22.232147   46445 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-507207 localhost] and IPs [192.168.83.30 127.0.0.1 ::1]
	I1120 21:43:22.269672   46445 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:43:23.092368   46445 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:43:23.833567   46445 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:43:23.833917   46445 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:43:23.950391   46445 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:43:24.173137   46445 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:43:24.368804   46445 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:43:24.505146   46445 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:43:24.600056   46445 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:43:24.601314   46445 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:43:24.603402   46445 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1120 21:43:20.832408   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	W1120 21:43:23.332755   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	W1120 21:43:21.308384   46379 node_ready.go:57] node "kindnet-507207" has "Ready":"False" status (will retry)
	W1120 21:43:23.447042   46379 node_ready.go:57] node "kindnet-507207" has "Ready":"False" status (will retry)
	W1120 21:43:25.447636   46379 node_ready.go:57] node "kindnet-507207" has "Ready":"False" status (will retry)
	I1120 21:43:22.151588   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:22.152255   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:22.152291   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:22.152619   47230 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1120 21:43:22.157982   47230 kubeadm.go:884] updating cluster {Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:43:22.158171   47230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:43:22.158223   47230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:22.211591   47230 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:43:22.211614   47230 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:43:22.211680   47230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:22.247690   47230 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:43:22.247712   47230 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:43:22.247719   47230 kubeadm.go:935] updating node { 192.168.50.92 8443 v1.34.1 crio true true} ...
	I1120 21:43:22.247814   47230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-763370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:43:22.247893   47230 ssh_runner.go:195] Run: crio config
	I1120 21:43:22.302915   47230 cni.go:84] Creating CNI manager for ""
	I1120 21:43:22.302938   47230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:43:22.302952   47230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:43:22.302972   47230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.92 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-763370 NodeName:pause-763370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:43:22.303099   47230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-763370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:43:22.303169   47230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:43:22.318421   47230 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:43:22.318491   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:43:22.332429   47230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1120 21:43:22.355454   47230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:43:22.381174   47230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
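The 2212-byte file copied above is the multi-document kubeadm config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---" lines). A minimal sketch for splitting such a file and identifying each document; the use of gopkg.in/yaml.v3 and the hard-coded path are assumptions for illustration, not minikube code:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3" // assumed dependency for this sketch
	)

	func main() {
		// Illustrative path taken from the kubeadm init command in the log.
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// kubeadm accepts several YAML documents separated by "---" lines.
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var hdr struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &hdr); err != nil || hdr.Kind == "" {
				continue // skip empty or malformed documents
			}
			fmt.Printf("%s (%s)\n", hdr.Kind, hdr.APIVersion)
		}
	}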
	I1120 21:43:22.404131   47230 ssh_runner.go:195] Run: grep 192.168.50.92	control-plane.minikube.internal$ /etc/hosts
	I1120 21:43:22.409397   47230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:22.580909   47230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:43:22.602545   47230 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370 for IP: 192.168.50.92
	I1120 21:43:22.602570   47230 certs.go:195] generating shared ca certs ...
	I1120 21:43:22.602590   47230 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:22.602754   47230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 21:43:22.602793   47230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 21:43:22.602800   47230 certs.go:257] generating profile certs ...
	I1120 21:43:22.602905   47230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/client.key
	I1120 21:43:22.602969   47230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.key.82ea8a75
	I1120 21:43:22.603023   47230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.key
	I1120 21:43:22.603136   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem (1338 bytes)
	W1120 21:43:22.603166   47230 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706_empty.pem, impossibly tiny 0 bytes
	I1120 21:43:22.603175   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 21:43:22.603211   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:43:22.603234   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:43:22.603265   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 21:43:22.603302   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:22.603944   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:43:22.639643   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:43:22.678981   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:43:22.716825   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:43:22.830036   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 21:43:22.933831   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:43:23.049586   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:43:23.112614   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:43:23.205732   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /usr/share/ca-certificates/77062.pem (1708 bytes)
	I1120 21:43:23.322710   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:43:23.365437   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem --> /usr/share/ca-certificates/7706.pem (1338 bytes)
	I1120 21:43:23.443712   47230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:43:23.493912   47230 ssh_runner.go:195] Run: openssl version
	I1120 21:43:23.506870   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.531430   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7706.pem /etc/ssl/certs/7706.pem
	I1120 21:43:23.557455   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.571438   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:36 /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.571513   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.587455   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:43:23.611795   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.643085   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77062.pem /etc/ssl/certs/77062.pem
	I1120 21:43:23.672417   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.686036   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:36 /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.686104   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.708741   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:43:23.735870   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.826259   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:43:23.891448   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.907688   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.907794   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.928181   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:43:23.982840   47230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:43:24.001324   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:43:24.022305   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:43:24.044730   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:43:24.059730   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:43:24.073983   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:43:24.087235   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
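Each openssl call above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. An equivalent standalone check in Go (the path is one of the certs from the log and is purely illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// One of several certs the log checks under /var/lib/minikube/certs.
		data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the
		// certificate's NotAfter falls within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}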
	I1120 21:43:24.102241   47230 kubeadm.go:401] StartCluster: {Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:24.102398   47230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:43:24.102462   47230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:43:24.205748   47230 cri.go:89] found id: "a8f530e568c757fdc6cf379f3aff3799f7ac9edc34796d92623ebca90bef7915"
	I1120 21:43:24.205788   47230 cri.go:89] found id: "83cd96810d2c877bdfa126a89328d7a35eb4be3fd8de4b2ed42c13193144713a"
	I1120 21:43:24.205794   47230 cri.go:89] found id: "8701c5fc6a886422420230e3fbea92c7d4aea86245ec3cc485da7f1aaae6a039"
	I1120 21:43:24.205799   47230 cri.go:89] found id: "8c5ac4300dcc187b93dcd172fa7be5d678471e2a1c514481aea543821e1648ed"
	I1120 21:43:24.205803   47230 cri.go:89] found id: "1d0718306f927d8437ba4a6e5d4e7118090ac488ca0a67da151e8d1900b4c8f8"
	I1120 21:43:24.205808   47230 cri.go:89] found id: "a4bab4186846f86bd976fb6b744cc894bcb7ba8a3c2aa0c4280a557962b79508"
	I1120 21:43:24.205812   47230 cri.go:89] found id: "f2f8984f6605cc119fd8d6509f611adccd97b1f8a92d063da3ba9b481c5f625a"
	I1120 21:43:24.205817   47230 cri.go:89] found id: "47912ef37c7f6bfb5e512cb8ba68e8722a5c82d599dac78f2a2efb6798d250e9"
	I1120 21:43:24.205820   47230 cri.go:89] found id: "2cde9ae8cae4f937f3ada12b4822797c5e72d4e0400b23ae5448cefd1047efaf"
	I1120 21:43:24.205828   47230 cri.go:89] found id: "22ef1f3a8c8b7f3776fba696f1e7097f4b1028136e3b96a6f7efae2623a45d66"
	I1120 21:43:24.205832   47230 cri.go:89] found id: ""
	I1120 21:43:24.205940   47230 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
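The stdout block above is truncated shortly after minikube enumerates kube-system containers with crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system. A small Go sketch that issues the same query locally (assumes crictl on PATH and sudo access; illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query the log shows minikube running over SSH inside the VM.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// --quiet prints one container ID per line.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("container:", id)
		}
	}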
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-763370 -n pause-763370
helpers_test.go:269: (dbg) Run:  kubectl --context pause-763370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-763370 -n pause-763370
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-763370 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-763370 logs -n 25: (1.675342639s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-flag-463882 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                   │ force-systemd-flag-463882 │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-021825 │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │                     │
	│ start   │ -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-021825 │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:41 UTC │
	│ stop    │ stopped-upgrade-744498 stop                                                                                                                                 │ stopped-upgrade-744498    │ jenkins │ v1.32.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:40 UTC │
	│ start   │ -p stopped-upgrade-744498 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-744498    │ jenkins │ v1.37.0 │ 20 Nov 25 21:40 UTC │ 20 Nov 25 21:41 UTC │
	│ delete  │ -p cert-expiration-925075                                                                                                                                   │ cert-expiration-925075    │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p guest-304958 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-304958              │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ ssh     │ force-systemd-flag-463882 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                        │ force-systemd-flag-463882 │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ delete  │ -p force-systemd-flag-463882                                                                                                                                │ force-systemd-flag-463882 │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p pause-763370 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-763370              │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:43 UTC │
	│ delete  │ -p kubernetes-upgrade-021825                                                                                                                                │ kubernetes-upgrade-021825 │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p auto-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:43 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-744498 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-744498    │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │                     │
	│ delete  │ -p stopped-upgrade-744498                                                                                                                                   │ stopped-upgrade-744498    │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:41 UTC │
	│ start   │ -p kindnet-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-507207            │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │ 20 Nov 25 21:43 UTC │
	│ start   │ -p calico-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio                        │ calico-507207             │ jenkins │ v1.37.0 │ 20 Nov 25 21:41 UTC │                     │
	│ start   │ -p pause-763370 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-763370              │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p auto-507207 pgrep -a kubelet                                                                                                                             │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p kindnet-507207 pgrep -a kubelet                                                                                                                          │ kindnet-507207            │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p auto-507207 sudo cat /etc/nsswitch.conf                                                                                                                  │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p auto-507207 sudo cat /etc/hosts                                                                                                                          │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p auto-507207 sudo cat /etc/resolv.conf                                                                                                                    │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p auto-507207 sudo crictl pods                                                                                                                             │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p auto-507207 sudo crictl ps --all                                                                                                                         │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	│ ssh     │ -p auto-507207 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                  │ auto-507207               │ jenkins │ v1.37.0 │ 20 Nov 25 21:43 UTC │ 20 Nov 25 21:43 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:43:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
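The header above documents the klog-style line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) used by every entry that follows. A minimal Go sketch that parses one such line from this log; the regular expression is an illustrative approximation, not the canonical klog grammar:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
		re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)
		line := "I1120 21:43:10.977962   47230 out.go:360] Setting OutFile to fd 1 ..."
		if m := re.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}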
	I1120 21:43:10.977962   47230 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:43:10.978357   47230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:43:10.978374   47230 out.go:374] Setting ErrFile to fd 2...
	I1120 21:43:10.978382   47230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:43:10.978732   47230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:43:10.979356   47230 out.go:368] Setting JSON to false
	I1120 21:43:10.980490   47230 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5141,"bootTime":1763669850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:43:10.980560   47230 start.go:143] virtualization: kvm guest
	I1120 21:43:10.982789   47230 out.go:179] * [pause-763370] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:43:10.984237   47230 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:43:10.984253   47230 notify.go:221] Checking for updates...
	I1120 21:43:10.987663   47230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:43:10.989676   47230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:43:10.990960   47230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 21:43:10.992483   47230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:43:10.993701   47230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:43:10.995499   47230 config.go:182] Loaded profile config "pause-763370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:10.995966   47230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:43:11.038904   47230 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 21:43:11.042306   47230 start.go:309] selected driver: kvm2
	I1120 21:43:11.042331   47230 start.go:930] validating driver "kvm2" against &{Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:11.042534   47230 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:43:11.044027   47230 cni.go:84] Creating CNI manager for ""
	I1120 21:43:11.044103   47230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:43:11.044166   47230 start.go:353] cluster config:
	{Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:11.044376   47230 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:43:11.046877   47230 out.go:179] * Starting "pause-763370" primary control-plane node in "pause-763370" cluster
	I1120 21:43:06.465653   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:06.466576   46445 main.go:143] libmachine: no network interface addresses found for domain calico-507207 (source=lease)
	I1120 21:43:06.466603   46445 main.go:143] libmachine: trying to list again with source=arp
	I1120 21:43:06.467094   46445 main.go:143] libmachine: unable to find current IP address of domain calico-507207 in network mk-calico-507207 (interfaces detected: [])
	I1120 21:43:06.467137   46445 retry.go:31] will retry after 4.447175288s: waiting for domain to come up
	I1120 21:43:10.919581   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:10.920601   46445 main.go:143] libmachine: domain calico-507207 has current primary IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:10.920628   46445 main.go:143] libmachine: found domain IP: 192.168.83.30
	I1120 21:43:10.920639   46445 main.go:143] libmachine: reserving static IP address...
	I1120 21:43:10.921165   46445 main.go:143] libmachine: unable to find host DHCP lease matching {name: "calico-507207", mac: "52:54:00:8b:6e:d5", ip: "192.168.83.30"} in network mk-calico-507207
	I1120 21:43:11.161120   46445 main.go:143] libmachine: reserved static IP address 192.168.83.30 for domain calico-507207
	I1120 21:43:11.161146   46445 main.go:143] libmachine: waiting for SSH...
	I1120 21:43:11.161154   46445 main.go:143] libmachine: Getting to WaitForSSH function...
	I1120 21:43:11.164055   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.164539   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.164567   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.164804   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:11.165181   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:11.165201   46445 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1120 21:43:11.285470   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
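(The `exit 0` above is libmachine's SSH liveness probe: if the trivial command succeeds, the guest is booted, reachable, and accepting the machine key. A roughly equivalent manual check from the host, reusing the key path, user, and IP that appear later in this same log, would be:

	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa \
	    docker@192.168.83.30 'exit 0' && echo "SSH is up"

This is a sketch for debugging, not a command the test harness itself runs.)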
	I1120 21:43:11.285919   46445 main.go:143] libmachine: domain creation complete
	I1120 21:43:11.287523   46445 machine.go:94] provisionDockerMachine start ...
	I1120 21:43:11.290066   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.290470   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.290494   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.290668   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:11.290960   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:11.290979   46445 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:43:11.407600   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1120 21:43:11.407632   46445 buildroot.go:166] provisioning hostname "calico-507207"
	I1120 21:43:11.411153   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.411666   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.411705   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.411965   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:11.412323   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:11.412347   46445 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-507207 && echo "calico-507207" | sudo tee /etc/hostname
	I1120 21:43:11.048266   47230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:43:11.048300   47230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:43:11.048308   47230 cache.go:65] Caching tarball of preloaded images
	I1120 21:43:11.048403   47230 preload.go:238] Found /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:43:11.048420   47230 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:43:11.048598   47230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/config.json ...
	I1120 21:43:11.048832   47230 start.go:360] acquireMachinesLock for pause-763370: {Name:mk53bc85b26a4546a3522126277fc9a0cbbc52b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1120 21:43:12.796933   47230 start.go:364] duration metric: took 1.748022714s to acquireMachinesLock for "pause-763370"
	I1120 21:43:12.797011   47230 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:43:12.797027   47230 fix.go:54] fixHost starting: 
	I1120 21:43:12.799576   47230 fix.go:112] recreateIfNeeded on pause-763370: state=Running err=<nil>
	W1120 21:43:12.799612   47230 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:43:13.337678   46379 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:43:13.337777   46379 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:43:13.337907   46379 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:43:13.338081   46379 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:43:13.338215   46379 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:43:13.338321   46379 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:43:13.340137   46379 out.go:252]   - Generating certificates and keys ...
	I1120 21:43:13.340225   46379 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:43:13.340302   46379 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:43:13.340398   46379 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:43:13.340495   46379 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:43:13.340624   46379 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:43:13.340713   46379 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:43:13.340825   46379 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:43:13.341031   46379 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-507207 localhost] and IPs [192.168.72.86 127.0.0.1 ::1]
	I1120 21:43:13.341110   46379 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:43:13.341295   46379 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-507207 localhost] and IPs [192.168.72.86 127.0.0.1 ::1]
	I1120 21:43:13.341382   46379 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:43:13.341465   46379 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:43:13.341525   46379 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:43:13.341604   46379 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:43:13.341671   46379 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:43:13.341752   46379 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:43:13.341846   46379 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:43:13.341962   46379 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:43:13.342043   46379 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:43:13.342168   46379 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:43:13.342267   46379 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:43:13.343825   46379 out.go:252]   - Booting up control plane ...
	I1120 21:43:13.343961   46379 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:43:13.344080   46379 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:43:13.344177   46379 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:43:13.344312   46379 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:43:13.344518   46379 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:43:13.344696   46379 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:43:13.344809   46379 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:43:13.344883   46379 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:43:13.345072   46379 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:43:13.345206   46379 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:43:13.345306   46379 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.009765248s
	I1120 21:43:13.345422   46379 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:43:13.345528   46379 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.86:8443/livez
	I1120 21:43:13.345653   46379 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:43:13.345763   46379 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:43:13.345880   46379 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.567923731s
	I1120 21:43:13.346001   46379 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.53811093s
	I1120 21:43:13.346106   46379 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.505162035s
	I1120 21:43:13.346260   46379 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:43:13.346453   46379 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:43:13.346558   46379 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:43:13.346821   46379 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-507207 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:43:13.346922   46379 kubeadm.go:319] [bootstrap-token] Using token: rr0fph.2tzjc9sivpbl0cbq
	I1120 21:43:13.348599   46379 out.go:252]   - Configuring RBAC rules ...
	I1120 21:43:13.348741   46379 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:43:13.348883   46379 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:43:13.349113   46379 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:43:13.349318   46379 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:43:13.349475   46379 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:43:13.349594   46379 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:43:13.349768   46379 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:43:13.349845   46379 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:43:13.349920   46379 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:43:13.349930   46379 kubeadm.go:319] 
	I1120 21:43:13.350016   46379 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:43:13.350028   46379 kubeadm.go:319] 
	I1120 21:43:13.350147   46379 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:43:13.350165   46379 kubeadm.go:319] 
	I1120 21:43:13.350200   46379 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:43:13.350284   46379 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:43:13.350357   46379 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:43:13.350365   46379 kubeadm.go:319] 
	I1120 21:43:13.350443   46379 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:43:13.350456   46379 kubeadm.go:319] 
	I1120 21:43:13.350522   46379 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:43:13.350532   46379 kubeadm.go:319] 
	I1120 21:43:13.350609   46379 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:43:13.350730   46379 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:43:13.350827   46379 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:43:13.350835   46379 kubeadm.go:319] 
	I1120 21:43:13.350962   46379 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:43:13.351081   46379 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:43:13.351094   46379 kubeadm.go:319] 
	I1120 21:43:13.351181   46379 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rr0fph.2tzjc9sivpbl0cbq \
	I1120 21:43:13.351310   46379 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 \
	I1120 21:43:13.351336   46379 kubeadm.go:319] 	--control-plane 
	I1120 21:43:13.351342   46379 kubeadm.go:319] 
	I1120 21:43:13.351463   46379 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:43:13.351476   46379 kubeadm.go:319] 
	I1120 21:43:13.351584   46379 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rr0fph.2tzjc9sivpbl0cbq \
	I1120 21:43:13.351755   46379 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7cd516dade98578ade3008f10032d26b18442d35f44ff9f19e57267900c01439 
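(The `--discovery-token-ca-cert-hash` printed in the join commands above is not a secret; it is the SHA-256 digest of the cluster CA's public key, and it can be recomputed on the control-plane node to validate a join command. The standard kubeadm-documented recipe, assuming the default CA path, is:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex \
	  | sed 's/^.* //'

The output should match the hex string after `sha256:` above.)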
	I1120 21:43:13.351769   46379 cni.go:84] Creating CNI manager for "kindnet"
	I1120 21:43:13.353819   46379 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:43:11.550640   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-507207
	
	I1120 21:43:11.553479   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.553953   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.553984   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.554196   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:11.554403   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:11.554426   46445 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-507207' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-507207/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-507207' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:43:11.682562   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
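(The script above is an idempotent /etc/hosts update: it does nothing if a line already ends in the hostname, rewrites an existing `127.0.1.1` entry if one exists, and only otherwise appends. After this step the guest's /etc/hosts is expected to contain a line of the form:

	127.0.1.1 calico-507207

so `calico-507207` resolves locally even without DNS.)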
	I1120 21:43:11.682602   46445 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 21:43:11.682646   46445 buildroot.go:174] setting up certificates
	I1120 21:43:11.682656   46445 provision.go:84] configureAuth start
	I1120 21:43:11.686447   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.686970   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.687012   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.689951   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.690358   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.690391   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.690544   46445 provision.go:143] copyHostCerts
	I1120 21:43:11.690626   46445 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem, removing ...
	I1120 21:43:11.690645   46445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem
	I1120 21:43:11.690739   46445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 21:43:11.690951   46445 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem, removing ...
	I1120 21:43:11.690969   46445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem
	I1120 21:43:11.691020   46445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 21:43:11.691112   46445 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem, removing ...
	I1120 21:43:11.691123   46445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem
	I1120 21:43:11.691172   46445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 21:43:11.691256   46445 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.calico-507207 san=[127.0.0.1 192.168.83.30 calico-507207 localhost minikube]
	I1120 21:43:11.991620   46445 provision.go:177] copyRemoteCerts
	I1120 21:43:11.991678   46445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:43:11.994666   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.995145   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:11.995181   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:11.995364   46445 sshutil.go:53] new ssh client: &{IP:192.168.83.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa Username:docker}
	I1120 21:43:12.092099   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:43:12.135227   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 21:43:12.173628   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:43:12.207433   46445 provision.go:87] duration metric: took 524.745233ms to configureAuth
	I1120 21:43:12.207460   46445 buildroot.go:189] setting minikube options for container-runtime
	I1120 21:43:12.207669   46445 config.go:182] Loaded profile config "calico-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:12.210735   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.211256   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.211296   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.211481   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.211692   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:12.211712   46445 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:43:12.506763   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
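(The `10.96.0.0/12` passed to `--insecure-registry` above is the cluster's ServiceCIDR, matching the `ServiceCIDR:10.96.0.0/12` in the config dump earlier in this log, so in-cluster registry Services can be pulled from over plain HTTP. A quick sanity check on the guest, assuming nothing beyond the command shown above, would be:

	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

which is exactly the content echoed back in the SSH output.)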
	I1120 21:43:12.506811   46445 machine.go:97] duration metric: took 1.219261033s to provisionDockerMachine
	I1120 21:43:12.506821   46445 client.go:176] duration metric: took 20.942020805s to LocalClient.Create
	I1120 21:43:12.506839   46445 start.go:167] duration metric: took 20.942078344s to libmachine.API.Create "calico-507207"
	I1120 21:43:12.506864   46445 start.go:293] postStartSetup for "calico-507207" (driver="kvm2")
	I1120 21:43:12.506878   46445 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:43:12.506967   46445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:43:12.511349   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.513474   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.513517   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.513732   46445 sshutil.go:53] new ssh client: &{IP:192.168.83.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa Username:docker}
	I1120 21:43:12.606159   46445 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:43:12.615456   46445 info.go:137] Remote host: Buildroot 2025.02
	I1120 21:43:12.615489   46445 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 21:43:12.615583   46445 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 21:43:12.615679   46445 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem -> 77062.pem in /etc/ssl/certs
	I1120 21:43:12.615800   46445 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:43:12.634948   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:12.671513   46445 start.go:296] duration metric: took 164.633307ms for postStartSetup
	I1120 21:43:12.674837   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.675249   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.675273   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.675538   46445 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/config.json ...
	I1120 21:43:12.675740   46445 start.go:128] duration metric: took 21.113021512s to createHost
	I1120 21:43:12.678116   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.678547   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.678570   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.678785   46445 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.679013   46445 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.83.30 22 <nil> <nil>}
	I1120 21:43:12.679025   46445 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 21:43:12.796742   46445 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763674992.745795385
	
	I1120 21:43:12.796768   46445 fix.go:216] guest clock: 1763674992.745795385
	I1120 21:43:12.796778   46445 fix.go:229] Guest: 2025-11-20 21:43:12.745795385 +0000 UTC Remote: 2025-11-20 21:43:12.675753604 +0000 UTC m=+81.281458306 (delta=70.041781ms)
	I1120 21:43:12.796800   46445 fix.go:200] guest clock delta is within tolerance: 70.041781ms
	I1120 21:43:12.796807   46445 start.go:83] releasing machines lock for "calico-507207", held for 21.234251171s
	I1120 21:43:12.800917   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.801469   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.801505   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.802341   46445 ssh_runner.go:195] Run: cat /version.json
	I1120 21:43:12.802445   46445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:43:12.806679   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.806879   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.807236   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.807278   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.807325   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:12.807360   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:12.807569   46445 sshutil.go:53] new ssh client: &{IP:192.168.83.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa Username:docker}
	I1120 21:43:12.807776   46445 sshutil.go:53] new ssh client: &{IP:192.168.83.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/calico-507207/id_rsa Username:docker}
	I1120 21:43:12.915023   46445 ssh_runner.go:195] Run: systemctl --version
	I1120 21:43:12.921824   46445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:43:13.100549   46445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:43:13.110151   46445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:43:13.110249   46445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:43:13.134789   46445 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 21:43:13.134812   46445 start.go:496] detecting cgroup driver to use...
	I1120 21:43:13.134904   46445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:43:13.161030   46445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:43:13.180945   46445 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:43:13.181008   46445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:43:13.201220   46445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:43:13.222316   46445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:43:13.426171   46445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:43:13.697654   46445 docker.go:234] disabling docker service ...
	I1120 21:43:13.697741   46445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:43:13.721062   46445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:43:13.738884   46445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:43:13.966724   46445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:43:14.147820   46445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:43:14.167307   46445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:43:14.193197   46445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:43:14.193268   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.208694   46445 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:43:14.208775   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.222402   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.237785   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.253289   46445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:43:14.267414   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.281271   46445 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.304666   46445 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:14.317792   46445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:43:14.329719   46445 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1120 21:43:14.329790   46445 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1120 21:43:14.360622   46445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:43:14.378779   46445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:14.526554   46445 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:43:14.661930   46445 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:43:14.662028   46445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:43:14.668545   46445 start.go:564] Will wait 60s for crictl version
	I1120 21:43:14.668619   46445 ssh_runner.go:195] Run: which crictl
	I1120 21:43:14.674963   46445 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 21:43:14.721030   46445 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 21:43:14.721115   46445 ssh_runner.go:195] Run: crio --version
	I1120 21:43:14.756922   46445 ssh_runner.go:195] Run: crio --version
	I1120 21:43:14.790743   46445 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	W1120 21:43:11.330930   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	W1120 21:43:13.832418   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	I1120 21:43:13.354946   46379 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:43:13.368140   46379 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:43:13.368164   46379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:43:13.397473   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
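(The `stat /opt/cni/bin/portmap` check above gates this path: kindnet relies on the stock CNI plugins already shipped in the guest image, so minikube only needs to apply the kindnet manifest it copied to /var/tmp/minikube/cni.yaml. To inspect what was applied, one could reuse the paths straight from this log, though their output was not captured in this run:

	sudo cat /var/tmp/minikube/cni.yaml
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get daemonsets -n kube-system

)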
	I1120 21:43:13.768831   46379 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:43:13.768990   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:13.769022   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-507207 minikube.k8s.io/updated_at=2025_11_20T21_43_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=kindnet-507207 minikube.k8s.io/primary=true
	I1120 21:43:13.843035   46379 ops.go:34] apiserver oom_adj: -16
	I1120 21:43:14.040880   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:14.541731   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:15.041599   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:12.801238   47230 out.go:252] * Updating the running kvm2 "pause-763370" VM ...
	I1120 21:43:12.801277   47230 machine.go:94] provisionDockerMachine start ...
	I1120 21:43:12.805648   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.806282   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:12.806322   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.806537   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.806831   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:12.806866   47230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:43:12.937431   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-763370
	
	I1120 21:43:12.937483   47230 buildroot.go:166] provisioning hostname "pause-763370"
	I1120 21:43:12.941914   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.942439   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:12.942475   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:12.942768   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:12.943104   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:12.943124   47230 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-763370 && echo "pause-763370" | sudo tee /etc/hostname
	I1120 21:43:13.087326   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-763370
	
	I1120 21:43:13.090606   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.091218   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.091270   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.091526   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:13.091814   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:13.091839   47230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-763370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-763370/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-763370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:43:13.219050   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:43:13.219095   47230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21923-3793/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-3793/.minikube}
	I1120 21:43:13.219159   47230 buildroot.go:174] setting up certificates
	I1120 21:43:13.219171   47230 provision.go:84] configureAuth start
	I1120 21:43:13.223070   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.223707   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.223744   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226312   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226704   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.226742   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.226930   47230 provision.go:143] copyHostCerts
	I1120 21:43:13.226985   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem, removing ...
	I1120 21:43:13.226998   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem
	I1120 21:43:13.227062   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/key.pem (1675 bytes)
	I1120 21:43:13.227170   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem, removing ...
	I1120 21:43:13.227186   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem
	I1120 21:43:13.227210   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/ca.pem (1082 bytes)
	I1120 21:43:13.227267   47230 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem, removing ...
	I1120 21:43:13.227274   47230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem
	I1120 21:43:13.227293   47230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-3793/.minikube/cert.pem (1123 bytes)
	I1120 21:43:13.227341   47230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem org=jenkins.pause-763370 san=[127.0.0.1 192.168.50.92 localhost minikube pause-763370]
	I1120 21:43:13.394135   47230 provision.go:177] copyRemoteCerts
	I1120 21:43:13.394198   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:43:13.397579   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.398078   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.398103   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.398270   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:13.496052   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:43:13.537847   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1120 21:43:13.591402   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:43:13.631078   47230 provision.go:87] duration metric: took 411.891808ms to configureAuth
	I1120 21:43:13.631111   47230 buildroot.go:189] setting minikube options for container-runtime
	I1120 21:43:13.631393   47230 config.go:182] Loaded profile config "pause-763370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:13.634843   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.635404   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:13.635444   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:13.635679   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:13.636000   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:13.636028   47230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:43:14.794917   46445 main.go:143] libmachine: domain calico-507207 has defined MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:14.795363   46445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:6e:d5", ip: ""} in network mk-calico-507207: {Iface:virbr5 ExpiryTime:2025-11-20 22:43:09 +0000 UTC Type:0 Mac:52:54:00:8b:6e:d5 Iaid: IPaddr:192.168.83.30 Prefix:24 Hostname:calico-507207 Clientid:01:52:54:00:8b:6e:d5}
	I1120 21:43:14.795387   46445 main.go:143] libmachine: domain calico-507207 has defined IP address 192.168.83.30 and MAC address 52:54:00:8b:6e:d5 in network mk-calico-507207
	I1120 21:43:14.795610   46445 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1120 21:43:14.800716   46445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:43:14.817405   46445 kubeadm.go:884] updating cluster {Name:calico-507207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-507207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.83.30 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:43:14.817510   46445 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:43:14.817556   46445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:14.852130   46445 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1120 21:43:14.852225   46445 ssh_runner.go:195] Run: which lz4
	I1120 21:43:14.857292   46445 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1120 21:43:14.862790   46445 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1120 21:43:14.862822   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1120 21:43:15.542009   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:16.041532   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:16.542033   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:17.041600   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:17.541753   46379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:43:17.696230   46379 kubeadm.go:1114] duration metric: took 3.927330153s to wait for elevateKubeSystemPrivileges
	I1120 21:43:17.696282   46379 kubeadm.go:403] duration metric: took 17.531680113s to StartCluster
	I1120 21:43:17.696304   46379 settings.go:142] acquiring lock: {Name:mke92973c8f33ef32fe11f7b266adf74cd3ec47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:17.696389   46379 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:43:17.698050   46379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/kubeconfig: {Name:mkab41c603ccf0009d2ed8d29c955ab526fa2eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:17.698314   46379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:43:17.698318   46379 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.86 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:43:17.698399   46379 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:43:17.698492   46379 addons.go:70] Setting storage-provisioner=true in profile "kindnet-507207"
	I1120 21:43:17.698504   46379 config.go:182] Loaded profile config "kindnet-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:43:17.698518   46379 addons.go:239] Setting addon storage-provisioner=true in "kindnet-507207"
	I1120 21:43:17.698535   46379 addons.go:70] Setting default-storageclass=true in profile "kindnet-507207"
	I1120 21:43:17.698548   46379 host.go:66] Checking if "kindnet-507207" exists ...
	I1120 21:43:17.698568   46379 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-507207"
	I1120 21:43:17.700338   46379 out.go:179] * Verifying Kubernetes components...
	I1120 21:43:17.702534   46379 addons.go:239] Setting addon default-storageclass=true in "kindnet-507207"
	I1120 21:43:17.702585   46379 host.go:66] Checking if "kindnet-507207" exists ...
	I1120 21:43:17.704505   46379 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:43:17.704526   46379 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:43:17.705025   46379 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:43:17.705028   46379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:17.706379   46379 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:43:17.706398   46379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:43:17.708032   46379 main.go:143] libmachine: domain kindnet-507207 has defined MAC address 52:54:00:7f:03:9a in network mk-kindnet-507207
	I1120 21:43:17.708623   46379 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:03:9a", ip: ""} in network mk-kindnet-507207: {Iface:virbr4 ExpiryTime:2025-11-20 22:42:48 +0000 UTC Type:0 Mac:52:54:00:7f:03:9a Iaid: IPaddr:192.168.72.86 Prefix:24 Hostname:kindnet-507207 Clientid:01:52:54:00:7f:03:9a}
	I1120 21:43:17.708654   46379 main.go:143] libmachine: domain kindnet-507207 has defined IP address 192.168.72.86 and MAC address 52:54:00:7f:03:9a in network mk-kindnet-507207
	I1120 21:43:17.708886   46379 sshutil.go:53] new ssh client: &{IP:192.168.72.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/kindnet-507207/id_rsa Username:docker}
	I1120 21:43:17.709652   46379 main.go:143] libmachine: domain kindnet-507207 has defined MAC address 52:54:00:7f:03:9a in network mk-kindnet-507207
	I1120 21:43:17.710201   46379 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:03:9a", ip: ""} in network mk-kindnet-507207: {Iface:virbr4 ExpiryTime:2025-11-20 22:42:48 +0000 UTC Type:0 Mac:52:54:00:7f:03:9a Iaid: IPaddr:192.168.72.86 Prefix:24 Hostname:kindnet-507207 Clientid:01:52:54:00:7f:03:9a}
	I1120 21:43:17.710235   46379 main.go:143] libmachine: domain kindnet-507207 has defined IP address 192.168.72.86 and MAC address 52:54:00:7f:03:9a in network mk-kindnet-507207
	I1120 21:43:17.710435   46379 sshutil.go:53] new ssh client: &{IP:192.168.72.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/kindnet-507207/id_rsa Username:docker}
	I1120 21:43:18.009369   46379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:43:18.128996   46379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:43:18.280102   46379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:43:18.468040   46379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:43:18.941708   46379 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
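
The start.go:977 line above is the tail of the Corefile edit issued at 21:43:18.009369: minikube streams the coredns ConfigMap through sed to splice in a hosts {} block that maps host.minikube.internal to the host-side gateway, then feeds the result back via kubectl replace. A standalone sketch of the same edit, assuming kubectl already points at the cluster and a stock Corefile layout:

    # Insert a hosts{} stanza ahead of the "forward . /etc/resolv.conf" line,
    # then swap the edited ConfigMap back in; 192.168.72.1 is this network's
    # gateway per the log above.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -
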
	I1120 21:43:18.942969   46379 node_ready.go:35] waiting up to 15m0s for node "kindnet-507207" to be "Ready" ...
	I1120 21:43:19.450444   46379 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-507207" context rescaled to 1 replicas
	I1120 21:43:19.593315   46379 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125229418s)
	I1120 21:43:19.594793   46379 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1120 21:43:16.329884   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	W1120 21:43:18.333605   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	I1120 21:43:19.595927   46379 addons.go:515] duration metric: took 1.897522319s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1120 21:43:16.726066   46445 crio.go:462] duration metric: took 1.868818577s to copy over tarball
	I1120 21:43:16.726163   46445 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1120 21:43:18.628353   46445 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.902149532s)
	I1120 21:43:18.628395   46445 crio.go:469] duration metric: took 1.902293503s to extract the tarball
	I1120 21:43:18.628406   46445 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1120 21:43:18.682597   46445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:18.739529   46445 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:43:18.739560   46445 cache_images.go:86] Images are preloaded, skipping loading
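
The crio.go lines above (21:43:14.852130 through here) trace the preload path: minikube asks CRI-O for its image list, and when an expected image such as registry.k8s.io/kube-apiserver:v1.34.1 is missing it scps the ~400 MB preload tarball into the guest and unpacks it straight into /var, after which the second crictl listing reports everything present. Condensed into shell, using the commands from the log (run inside the guest):

    # Extract only if the release images are absent from the runtime.
    sudo crictl images --output json | grep -q 'kube-apiserver:v1.34.1' || {
        # Unpack the cached tarball into /var, preserving the
        # security.capability xattrs, then drop the tarball.
        sudo tar --xattrs --xattrs-include security.capability \
            -I lz4 -C /var -xf /preloaded.tar.lz4
        sudo rm /preloaded.tar.lz4
    }
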
	I1120 21:43:18.739572   46445 kubeadm.go:935] updating node { 192.168.83.30 8443 v1.34.1 crio true true} ...
	I1120 21:43:18.739687   46445 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-507207 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-507207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1120 21:43:18.739775   46445 ssh_runner.go:195] Run: crio config
	I1120 21:43:18.797735   46445 cni.go:84] Creating CNI manager for "calico"
	I1120 21:43:18.797780   46445 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:43:18.797810   46445 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.30 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-507207 NodeName:calico-507207 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:43:18.798018   46445 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-507207"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:43:18.798106   46445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:43:18.812415   46445 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:43:18.812506   46445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:43:18.826043   46445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1120 21:43:18.858191   46445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:43:18.884370   46445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
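
The kubeadm.yaml written above stacks four documents into one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. One way to sanity-check such a file by hand without touching cluster state, assuming the binary path from the log, is kubeadm's dry-run mode:

    # Parse and validate the generated config; --dry-run prints the
    # actions kubeadm would take and exits without applying them.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
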
	I1120 21:43:18.914663   46445 ssh_runner.go:195] Run: grep 192.168.83.30	control-plane.minikube.internal$ /etc/hosts
	I1120 21:43:18.921359   46445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
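
The bash one-liner above is minikube's idempotent /etc/hosts update: filter out any stale control-plane.minikube.internal entry, append the current mapping, and copy the rebuilt file back in one step so reruns never duplicate the line. The same pattern with the literals factored out (a sketch; the grep treats the name as a regex, which is close enough for these hostnames):

    ip=192.168.83.30 name=control-plane.minikube.internal
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
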
	I1120 21:43:18.943614   46445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:19.158594   46445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:43:19.201422   46445 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207 for IP: 192.168.83.30
	I1120 21:43:19.201453   46445 certs.go:195] generating shared ca certs ...
	I1120 21:43:19.201491   46445 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.201668   46445 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 21:43:19.201730   46445 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 21:43:19.201741   46445 certs.go:257] generating profile certs ...
	I1120 21:43:19.201829   46445 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.key
	I1120 21:43:19.201879   46445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt with IP's: []
	I1120 21:43:19.290057   46445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt ...
	I1120 21:43:19.290090   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: {Name:mk8f923f848c03ed741c45e7ba45e75e4c375b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.290300   46445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.key ...
	I1120 21:43:19.290316   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.key: {Name:mk6275d0f8481b4dae6a07659889c325cb1e0d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.290444   46445 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key.9e7bb764
	I1120 21:43:19.290472   46445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt.9e7bb764 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.30]
	I1120 21:43:19.490899   46445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt.9e7bb764 ...
	I1120 21:43:19.490932   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt.9e7bb764: {Name:mk1036f22022b13f534f27f2e23460d522660a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.491137   46445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key.9e7bb764 ...
	I1120 21:43:19.491156   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key.9e7bb764: {Name:mk187c6c1f2ecdf12600aed5e9c8ec401ed7e45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.491268   46445 certs.go:382] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt.9e7bb764 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt
	I1120 21:43:19.491372   46445 certs.go:386] copying /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key.9e7bb764 -> /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key
	I1120 21:43:19.491454   46445 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.key
	I1120 21:43:19.491473   46445 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.crt with IP's: []
	I1120 21:43:19.721629   46445 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.crt ...
	I1120 21:43:19.721665   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.crt: {Name:mk88d9c0eb4cdcce476b3c18d0d6d3c109e71e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.721897   46445 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.key ...
	I1120 21:43:19.721924   46445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.key: {Name:mkeec0206f16d1ef05d4ed4ea26f2c23aa44a015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:19.722143   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem (1338 bytes)
	W1120 21:43:19.722182   46445 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706_empty.pem, impossibly tiny 0 bytes
	I1120 21:43:19.722191   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 21:43:19.722213   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:43:19.722234   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:43:19.722255   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 21:43:19.722292   46445 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:19.722827   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:43:19.759961   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:43:19.800903   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:43:19.841034   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:43:19.879179   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 21:43:19.933934   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:43:19.977243   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:43:20.013923   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:43:20.050520   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:43:20.092091   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem --> /usr/share/ca-certificates/7706.pem (1338 bytes)
	I1120 21:43:20.129917   46445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /usr/share/ca-certificates/77062.pem (1708 bytes)
	I1120 21:43:20.171308   46445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:43:20.200414   46445 ssh_runner.go:195] Run: openssl version
	I1120 21:43:20.208330   46445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77062.pem
	I1120 21:43:20.221348   46445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77062.pem /etc/ssl/certs/77062.pem
	I1120 21:43:20.236649   46445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77062.pem
	I1120 21:43:20.243285   46445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:36 /usr/share/ca-certificates/77062.pem
	I1120 21:43:20.243360   46445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77062.pem
	I1120 21:43:20.251893   46445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:43:20.267188   46445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/77062.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:43:20.282030   46445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:20.297483   46445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:43:20.314463   46445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:20.323449   46445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:20.323537   46445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:20.334177   46445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:43:20.352927   46445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:43:20.372269   46445 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7706.pem
	I1120 21:43:20.389645   46445 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7706.pem /etc/ssl/certs/7706.pem
	I1120 21:43:20.405875   46445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7706.pem
	I1120 21:43:20.413878   46445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:36 /usr/share/ca-certificates/7706.pem
	I1120 21:43:20.413962   46445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7706.pem
	I1120 21:43:20.423718   46445 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:43:20.437863   46445 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7706.pem /etc/ssl/certs/51391683.0
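
The openssl/ln pairs above (21:43:20.200414 onward) implement OpenSSL's hashed CA directory: lookups under /etc/ssl/certs go by subject-name hash, so every installed PEM needs a <hash>.0 symlink. How the names seen in the log (b5213941.0, 3ec20f2e.0, 51391683.0) are derived:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    # -hash prints the subject-name hash OpenSSL uses for directory lookups
    h=$(openssl x509 -hash -noout -in "$cert")      # b5213941 for minikubeCA
    sudo ln -fs "$cert" "/etc/ssl/certs/$h.0"
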
	I1120 21:43:20.452939   46445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:43:20.459111   46445 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:43:20.459173   46445 kubeadm.go:401] StartCluster: {Name:calico-507207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-507207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.83.30 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:20.459256   46445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:43:20.459314   46445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:43:20.510188   46445 cri.go:89] found id: ""
	I1120 21:43:20.510282   46445 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:43:20.528031   46445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:43:20.543218   46445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:43:20.557314   46445 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:43:20.557336   46445 kubeadm.go:158] found existing configuration files:
	
	I1120 21:43:20.557393   46445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:43:20.573323   46445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:43:20.573396   46445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:43:20.586687   46445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:43:20.607619   46445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:43:20.607693   46445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:43:20.622615   46445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:43:20.636571   46445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:43:20.636645   46445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:43:20.655844   46445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:43:20.671216   46445 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:43:20.671292   46445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:43:20.688117   46445 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1120 21:43:20.753280   46445 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:43:20.753383   46445 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:43:20.888634   46445 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:43:20.888794   46445 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:43:20.888973   46445 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:43:20.911020   46445 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:43:19.366295   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:43:19.366321   47230 machine.go:97] duration metric: took 6.565033306s to provisionDockerMachine
	I1120 21:43:19.366334   47230 start.go:293] postStartSetup for "pause-763370" (driver="kvm2")
	I1120 21:43:19.366346   47230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:43:19.366430   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:43:19.370029   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.370516   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.370543   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.370714   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.467003   47230 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:43:19.473573   47230 info.go:137] Remote host: Buildroot 2025.02
	I1120 21:43:19.473609   47230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/addons for local assets ...
	I1120 21:43:19.473701   47230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-3793/.minikube/files for local assets ...
	I1120 21:43:19.473831   47230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem -> 77062.pem in /etc/ssl/certs
	I1120 21:43:19.474040   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:43:19.494153   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:19.535592   47230 start.go:296] duration metric: took 169.240571ms for postStartSetup
	I1120 21:43:19.535640   47230 fix.go:56] duration metric: took 6.738612108s for fixHost
	I1120 21:43:19.539008   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.539485   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.539520   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.539742   47230 main.go:143] libmachine: Using SSH client type: native
	I1120 21:43:19.540068   47230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I1120 21:43:19.540082   47230 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1120 21:43:19.661922   47230 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763674999.654051230
	
	I1120 21:43:19.661948   47230 fix.go:216] guest clock: 1763674999.654051230
	I1120 21:43:19.661972   47230 fix.go:229] Guest: 2025-11-20 21:43:19.65405123 +0000 UTC Remote: 2025-11-20 21:43:19.535646072 +0000 UTC m=+8.619190318 (delta=118.405158ms)
	I1120 21:43:19.661993   47230 fix.go:200] guest clock delta is within tolerance: 118.405158ms
	I1120 21:43:19.661999   47230 start.go:83] releasing machines lock for "pause-763370", held for 6.86502006s
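
The fix.go lines above show how the guest-clock check works: run date +%s.%N over SSH, diff it against the host clock taken at roughly the same instant, and accept any delta inside the tolerance instead of resyncing. A rough host-side reproduction (address and key path from the log; the SSH round trip adds skew, which is exactly why the comparison is tolerance-based):

    key=/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa
    guest=$(ssh -i "$key" docker@192.168.50.92 'date +%s.%N')
    host=$(date +%s.%N)
    # positive delta => guest clock runs ahead of the host
    echo "delta: $(echo "$guest - $host" | bc)s"
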
	I1120 21:43:19.665305   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.665827   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.665871   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.666470   47230 ssh_runner.go:195] Run: cat /version.json
	I1120 21:43:19.666517   47230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:43:19.670623   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.670663   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671158   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.671199   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671213   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:19.671246   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:19.671589   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.671750   47230 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/pause-763370/id_rsa Username:docker}
	I1120 21:43:19.757220   47230 ssh_runner.go:195] Run: systemctl --version
	I1120 21:43:19.791329   47230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:43:19.958183   47230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:43:19.972171   47230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:43:19.972256   47230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:43:19.986821   47230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:43:19.986878   47230 start.go:496] detecting cgroup driver to use...
	I1120 21:43:19.986960   47230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:43:20.020155   47230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:43:20.042276   47230 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:43:20.042351   47230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:43:20.075095   47230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:43:20.096418   47230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:43:20.313659   47230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:43:20.553252   47230 docker.go:234] disabling docker service ...
	I1120 21:43:20.553344   47230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:43:20.586764   47230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:43:20.604836   47230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:43:20.829605   47230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
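
Before configuring CRI-O, minikube makes sure no competing runtime can own the CRI socket: cri-dockerd and docker are stopped, their socket units disabled, and the services masked so nothing restarts them. The equivalent shell, mirroring the logged sequence:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
        sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
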
	I1120 21:43:20.944037   46445 out.go:252]   - Generating certificates and keys ...
	I1120 21:43:20.944143   46445 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:43:20.944269   46445 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:43:21.449215   46445 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:43:21.028720   47230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:43:21.047746   47230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:43:21.075961   47230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:43:21.076021   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.091397   47230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1120 21:43:21.091494   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.105351   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.120611   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.139624   47230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:43:21.157792   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.172905   47230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.186929   47230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:43:21.202707   47230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:43:21.217837   47230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:43:21.232547   47230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:21.437520   47230 ssh_runner.go:195] Run: sudo systemctl restart crio
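
The sed run above edits CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf rather than the main config: pin the pause image, force the cgroupfs cgroup manager, park conmon in the pod cgroup, and open low ports via default_sysctls, then restart the daemon. Reduced to its three key edits (same file and values as the log):

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    sudo systemctl daemon-reload && sudo systemctl restart crio
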
	I1120 21:43:22.024669   47230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:43:22.024747   47230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:43:22.032424   47230 start.go:564] Will wait 60s for crictl version
	I1120 21:43:22.032500   47230 ssh_runner.go:195] Run: which crictl
	I1120 21:43:22.037409   47230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1120 21:43:22.077081   47230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1120 21:43:22.077174   47230 ssh_runner.go:195] Run: crio --version
	I1120 21:43:22.112251   47230 ssh_runner.go:195] Run: crio --version
	I1120 21:43:22.147198   47230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1120 21:43:21.575325   46445 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:43:21.724903   46445 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:43:21.788174   46445 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:43:22.007441   46445 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:43:22.007627   46445 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-507207 localhost] and IPs [192.168.83.30 127.0.0.1 ::1]
	I1120 21:43:22.231958   46445 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:43:22.232147   46445 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-507207 localhost] and IPs [192.168.83.30 127.0.0.1 ::1]
	I1120 21:43:22.269672   46445 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:43:23.092368   46445 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:43:23.833567   46445 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:43:23.833917   46445 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:43:23.950391   46445 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:43:24.173137   46445 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:43:24.368804   46445 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:43:24.505146   46445 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:43:24.600056   46445 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:43:24.601314   46445 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:43:24.603402   46445 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1120 21:43:20.832408   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	W1120 21:43:23.332755   46137 pod_ready.go:104] pod "coredns-66bc5c9577-qdpx5" is not "Ready", error: <nil>
	W1120 21:43:21.308384   46379 node_ready.go:57] node "kindnet-507207" has "Ready":"False" status (will retry)
	W1120 21:43:23.447042   46379 node_ready.go:57] node "kindnet-507207" has "Ready":"False" status (will retry)
	W1120 21:43:25.447636   46379 node_ready.go:57] node "kindnet-507207" has "Ready":"False" status (will retry)
	I1120 21:43:22.151588   47230 main.go:143] libmachine: domain pause-763370 has defined MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:22.152255   47230 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:9e:c8", ip: ""} in network mk-pause-763370: {Iface:virbr2 ExpiryTime:2025-11-20 22:42:05 +0000 UTC Type:0 Mac:52:54:00:b2:9e:c8 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:pause-763370 Clientid:01:52:54:00:b2:9e:c8}
	I1120 21:43:22.152291   47230 main.go:143] libmachine: domain pause-763370 has defined IP address 192.168.50.92 and MAC address 52:54:00:b2:9e:c8 in network mk-pause-763370
	I1120 21:43:22.152619   47230 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1120 21:43:22.157982   47230 kubeadm.go:884] updating cluster {Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:43:22.158171   47230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:43:22.158223   47230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:22.211591   47230 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:43:22.211614   47230 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:43:22.211680   47230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:43:22.247690   47230 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:43:22.247712   47230 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:43:22.247719   47230 kubeadm.go:935] updating node { 192.168.50.92 8443 v1.34.1 crio true true} ...
	I1120 21:43:22.247814   47230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-763370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:43:22.247893   47230 ssh_runner.go:195] Run: crio config
	I1120 21:43:22.302915   47230 cni.go:84] Creating CNI manager for ""
	I1120 21:43:22.302938   47230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 21:43:22.302952   47230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:43:22.302972   47230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.92 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-763370 NodeName:pause-763370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:43:22.303099   47230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-763370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.92"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.92"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:43:22.303169   47230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:43:22.318421   47230 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:43:22.318491   47230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:43:22.332429   47230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1120 21:43:22.355454   47230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:43:22.381174   47230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1120 21:43:22.404131   47230 ssh_runner.go:195] Run: grep 192.168.50.92	control-plane.minikube.internal$ /etc/hosts
	I1120 21:43:22.409397   47230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:43:22.580909   47230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:43:22.602545   47230 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370 for IP: 192.168.50.92
	I1120 21:43:22.602570   47230 certs.go:195] generating shared ca certs ...
	I1120 21:43:22.602590   47230 certs.go:227] acquiring lock for ca certs: {Name:mkade057eef8dd703114a73d794ac155befa5ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:43:22.602754   47230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key
	I1120 21:43:22.602793   47230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key
	I1120 21:43:22.602800   47230 certs.go:257] generating profile certs ...
	I1120 21:43:22.602905   47230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/client.key
	I1120 21:43:22.602969   47230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.key.82ea8a75
	I1120 21:43:22.603023   47230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.key
	I1120 21:43:22.603136   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem (1338 bytes)
	W1120 21:43:22.603166   47230 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706_empty.pem, impossibly tiny 0 bytes
	I1120 21:43:22.603175   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca-key.pem (1675 bytes)
	I1120 21:43:22.603211   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:43:22.603234   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:43:22.603265   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/certs/key.pem (1675 bytes)
	I1120 21:43:22.603302   47230 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem (1708 bytes)
	I1120 21:43:22.603944   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:43:22.639643   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:43:22.678981   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:43:22.716825   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:43:22.830036   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 21:43:22.933831   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:43:23.049586   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:43:23.112614   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/pause-763370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:43:23.205732   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/ssl/certs/77062.pem --> /usr/share/ca-certificates/77062.pem (1708 bytes)
	I1120 21:43:23.322710   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:43:23.365437   47230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-3793/.minikube/certs/7706.pem --> /usr/share/ca-certificates/7706.pem (1338 bytes)
	I1120 21:43:23.443712   47230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:43:23.493912   47230 ssh_runner.go:195] Run: openssl version
	I1120 21:43:23.506870   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.531430   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7706.pem /etc/ssl/certs/7706.pem
	I1120 21:43:23.557455   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.571438   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:36 /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.571513   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7706.pem
	I1120 21:43:23.587455   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:43:23.611795   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.643085   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/77062.pem /etc/ssl/certs/77062.pem
	I1120 21:43:23.672417   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.686036   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:36 /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.686104   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77062.pem
	I1120 21:43:23.708741   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:43:23.735870   47230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.826259   47230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:43:23.891448   47230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.907688   47230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.907794   47230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:43:23.928181   47230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
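
Each of the three passes above (for 7706.pem, 77062.pem and minikubeCA.pem) follows OpenSSL's subject-hash convention: compute the hash with openssl x509 -hash, then expose the cert as <hash>.0 under /etc/ssl/certs, which is the name TLS clients resolve (51391683.0, 3ec20f2e.0 and b5213941.0 are the three hashes checked above). One pass, sketched with an illustrative path:

    # Install a CA cert under the subject-hash name OpenSSL clients look up.
    CERT=/usr/share/ca-certificates/example.pem        # illustrative path
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints e.g. 51391683
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"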
	I1120 21:43:23.982840   47230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:43:24.001324   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:43:24.022305   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:43:24.044730   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:43:24.059730   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:43:24.073983   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:43:24.087235   47230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
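
The -checkend 86400 invocations above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if the cert is expired or about to expire, which is the signal to regenerate it rather than reuse it. For example:

    # Exit 0: valid for at least another 24h. Exit 1: expired or expiring soon.
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
        && echo 'still valid in 24h' || echo 'expiring soon; regenerate'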
	I1120 21:43:24.102241   47230 kubeadm.go:401] StartCluster: {Name:pause-763370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-763370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:43:24.102398   47230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:43:24.102462   47230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:43:24.205748   47230 cri.go:89] found id: "a8f530e568c757fdc6cf379f3aff3799f7ac9edc34796d92623ebca90bef7915"
	I1120 21:43:24.205788   47230 cri.go:89] found id: "83cd96810d2c877bdfa126a89328d7a35eb4be3fd8de4b2ed42c13193144713a"
	I1120 21:43:24.205794   47230 cri.go:89] found id: "8701c5fc6a886422420230e3fbea92c7d4aea86245ec3cc485da7f1aaae6a039"
	I1120 21:43:24.205799   47230 cri.go:89] found id: "8c5ac4300dcc187b93dcd172fa7be5d678471e2a1c514481aea543821e1648ed"
	I1120 21:43:24.205803   47230 cri.go:89] found id: "1d0718306f927d8437ba4a6e5d4e7118090ac488ca0a67da151e8d1900b4c8f8"
	I1120 21:43:24.205808   47230 cri.go:89] found id: "a4bab4186846f86bd976fb6b744cc894bcb7ba8a3c2aa0c4280a557962b79508"
	I1120 21:43:24.205812   47230 cri.go:89] found id: "f2f8984f6605cc119fd8d6509f611adccd97b1f8a92d063da3ba9b481c5f625a"
	I1120 21:43:24.205817   47230 cri.go:89] found id: "47912ef37c7f6bfb5e512cb8ba68e8722a5c82d599dac78f2a2efb6798d250e9"
	I1120 21:43:24.205820   47230 cri.go:89] found id: "2cde9ae8cae4f937f3ada12b4822797c5e72d4e0400b23ae5448cefd1047efaf"
	I1120 21:43:24.205828   47230 cri.go:89] found id: "22ef1f3a8c8b7f3776fba696f1e7097f4b1028136e3b96a6f7efae2623a45d66"
	I1120 21:43:24.205832   47230 cri.go:89] found id: ""
	I1120 21:43:24.205940   47230 ssh_runner.go:195] Run: sudo runc list -f json
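
The found id: lines are the output of the crictl invocation at 21:43:24.102462: --quiet prints one bare container ID per line, and the --label filter limits the listing to containers whose pod namespace is kube-system (the final empty id is the trailing newline). Roughly the same query by hand, assuming the CRI-O socket from the config above:

    # List all kube-system containers, running or not, as bare IDs.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
        ps -a --quiet --label io.kubernetes.pod.namespace=kube-system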

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-763370 -n pause-763370
helpers_test.go:269: (dbg) Run:  kubectl --context pause-763370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (45.68s)

                                                
                                    

Test pass (290/345)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 8.62
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.23
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.64
22 TestOffline 106.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 210.93
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.59
36 TestAddons/parallel/RegistryCreds 0.7
38 TestAddons/parallel/InspektorGadget 11.76
39 TestAddons/parallel/MetricsServer 5.79
42 TestAddons/parallel/Headlamp 79.34
43 TestAddons/parallel/CloudSpanner 6.57
45 TestAddons/parallel/NvidiaDevicePlugin 6.53
48 TestAddons/StoppedEnableDisable 81.71
49 TestCertOptions 45.41
50 TestCertExpiration 298.13
52 TestForceSystemdFlag 79.41
53 TestForceSystemdEnv 90.07
58 TestErrorSpam/setup 42.04
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.65
61 TestErrorSpam/pause 1.52
62 TestErrorSpam/unpause 1.78
63 TestErrorSpam/stop 5.4
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 55.07
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.64
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
75 TestFunctional/serial/CacheCmd/cache/add_local 1.11
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 58.8
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.3
86 TestFunctional/serial/LogsFileCmd 1.38
87 TestFunctional/serial/InvalidService 4.18
89 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DryRun 0.21
92 TestFunctional/parallel/InternationalLanguage 0.11
93 TestFunctional/parallel/StatusCmd 0.65
98 TestFunctional/parallel/AddonsCmd 0.18
101 TestFunctional/parallel/SSHCmd 0.32
102 TestFunctional/parallel/CpCmd 1.05
104 TestFunctional/parallel/FileSync 0.17
105 TestFunctional/parallel/CertSync 1.05
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
113 TestFunctional/parallel/License 0.27
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
116 TestFunctional/parallel/ProfileCmd/profile_list 0.29
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
118 TestFunctional/parallel/MountCmd/any-port 66.78
119 TestFunctional/parallel/MountCmd/specific-port 1.22
120 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
125 TestFunctional/parallel/ImageCommands/ImageBuild 2.86
126 TestFunctional/parallel/ImageCommands/Setup 0.44
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.28
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
136 TestFunctional/parallel/Version/short 0.06
137 TestFunctional/parallel/Version/components 0.46
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
148 TestFunctional/parallel/ServiceCmd/List 1.22
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.23
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 221.73
161 TestMultiControlPlane/serial/DeployApp 6.72
162 TestMultiControlPlane/serial/PingHostFromPods 1.36
163 TestMultiControlPlane/serial/AddWorkerNode 44.68
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.69
166 TestMultiControlPlane/serial/CopyFile 10.87
167 TestMultiControlPlane/serial/StopSecondaryNode 75.26
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.53
169 TestMultiControlPlane/serial/RestartSecondaryNode 43.79
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 306.36
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.36
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
174 TestMultiControlPlane/serial/StopCluster 167.56
175 TestMultiControlPlane/serial/RestartCluster 98.33
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
177 TestMultiControlPlane/serial/AddSecondaryNode 76.15
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.71
183 TestJSONOutput/start/Command 83.75
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.05
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 85.5
215 TestMountStart/serial/StartWithMountFirst 21.82
216 TestMountStart/serial/VerifyMountFirst 0.31
217 TestMountStart/serial/StartWithMountSecond 20.78
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.68
220 TestMountStart/serial/VerifyMountPostDelete 0.31
221 TestMountStart/serial/Stop 1.33
222 TestMountStart/serial/RestartStopped 18.22
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 134.6
227 TestMultiNode/serial/DeployApp2Nodes 5.31
228 TestMultiNode/serial/PingHostFrom2Pods 0.88
229 TestMultiNode/serial/AddNode 45.2
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.46
232 TestMultiNode/serial/CopyFile 6.02
233 TestMultiNode/serial/StopNode 2.31
234 TestMultiNode/serial/StartAfterStop 43.98
235 TestMultiNode/serial/RestartKeepsNodes 301.17
236 TestMultiNode/serial/DeleteNode 2.69
237 TestMultiNode/serial/StopMultiNode 156.09
238 TestMultiNode/serial/RestartMultiNode 126.98
239 TestMultiNode/serial/ValidateNameConflict 43.57
246 TestScheduledStopUnix 111.88
250 TestRunningBinaryUpgrade 109.54
252 TestKubernetesUpgrade 187.85
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 108.11
257 TestNoKubernetes/serial/StartWithStopK8s 19.2
258 TestNoKubernetes/serial/Start 40.02
259 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
261 TestNoKubernetes/serial/ProfileList 1.11
262 TestNoKubernetes/serial/Stop 1.47
263 TestNoKubernetes/serial/StartNoArgs 46.63
264 TestStoppedBinaryUpgrade/Setup 0.52
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
266 TestStoppedBinaryUpgrade/Upgrade 114.58
274 TestNetworkPlugins/group/false 3.54
278 TestISOImage/Setup 36.29
287 TestPause/serial/Start 105.78
288 TestNetworkPlugins/group/auto/Start 114.02
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
290 TestNetworkPlugins/group/kindnet/Start 103.87
292 TestISOImage/Binaries/crictl 0.18
293 TestISOImage/Binaries/curl 0.2
294 TestISOImage/Binaries/docker 0.21
295 TestISOImage/Binaries/git 0.2
296 TestISOImage/Binaries/iptables 0.21
297 TestISOImage/Binaries/podman 0.2
298 TestISOImage/Binaries/rsync 0.21
299 TestISOImage/Binaries/socat 0.17
300 TestISOImage/Binaries/wget 0.2
301 TestISOImage/Binaries/VBoxControl 0.22
302 TestISOImage/Binaries/VBoxService 0.21
303 TestNetworkPlugins/group/calico/Start 131.24
305 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
306 TestNetworkPlugins/group/auto/KubeletFlags 0.2
307 TestNetworkPlugins/group/auto/NetCatPod 11.3
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
309 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
310 TestNetworkPlugins/group/auto/DNS 0.19
311 TestNetworkPlugins/group/auto/Localhost 0.14
312 TestNetworkPlugins/group/auto/HairPin 0.14
313 TestNetworkPlugins/group/kindnet/DNS 0.23
314 TestNetworkPlugins/group/kindnet/Localhost 0.19
315 TestNetworkPlugins/group/kindnet/HairPin 0.18
316 TestNetworkPlugins/group/custom-flannel/Start 73.28
317 TestNetworkPlugins/group/enable-default-cni/Start 109.37
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/flannel/Start 107.94
320 TestNetworkPlugins/group/calico/KubeletFlags 0.19
321 TestNetworkPlugins/group/calico/NetCatPod 11.29
322 TestNetworkPlugins/group/calico/DNS 0.19
323 TestNetworkPlugins/group/calico/Localhost 0.15
324 TestNetworkPlugins/group/calico/HairPin 0.15
325 TestNetworkPlugins/group/bridge/Start 113.62
326 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
327 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.4
328 TestNetworkPlugins/group/custom-flannel/DNS 0.19
329 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
330 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
332 TestStartStop/group/old-k8s-version/serial/FirstStart 102.33
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.32
335 TestNetworkPlugins/group/flannel/ControllerPod 6.01
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
339 TestNetworkPlugins/group/flannel/NetCatPod 10.46
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
341 TestNetworkPlugins/group/flannel/DNS 0.19
342 TestNetworkPlugins/group/flannel/Localhost 0.16
343 TestNetworkPlugins/group/flannel/HairPin 0.17
345 TestStartStop/group/no-preload/serial/FirstStart 114.55
347 TestStartStop/group/embed-certs/serial/FirstStart 97.51
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
349 TestNetworkPlugins/group/bridge/NetCatPod 10.29
350 TestNetworkPlugins/group/bridge/DNS 0.2
351 TestNetworkPlugins/group/bridge/Localhost 0.17
352 TestNetworkPlugins/group/bridge/HairPin 0.17
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 93.65
355 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.44
357 TestStartStop/group/old-k8s-version/serial/Stop 82.96
358 TestStartStop/group/embed-certs/serial/DeployApp 9.28
359 TestStartStop/group/no-preload/serial/DeployApp 9.29
360 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
361 TestStartStop/group/embed-certs/serial/Stop 85.47
362 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
363 TestStartStop/group/no-preload/serial/Stop 90.13
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
366 TestStartStop/group/default-k8s-diff-port/serial/Stop 71.81
367 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
368 TestStartStop/group/old-k8s-version/serial/SecondStart 45.83
369 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
370 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
371 TestStartStop/group/embed-certs/serial/SecondStart 50.37
372 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/no-preload/serial/SecondStart 70.88
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 75.61
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/old-k8s-version/serial/Pause 3.37
380 TestStartStop/group/newest-cni/serial/FirstStart 85.12
381 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
383 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
384 TestStartStop/group/embed-certs/serial/Pause 3.55
386 TestISOImage/PersistentMounts//data 0.23
387 TestISOImage/PersistentMounts//var/lib/docker 0.22
388 TestISOImage/PersistentMounts//var/lib/cni 0.22
389 TestISOImage/PersistentMounts//var/lib/kubelet 0.22
390 TestISOImage/PersistentMounts//var/lib/minikube 0.22
391 TestISOImage/PersistentMounts//var/lib/toolbox 0.21
392 TestISOImage/PersistentMounts//var/lib/boot2docker 0.23
393 TestISOImage/VersionJSON 0.21
394 TestISOImage/eBPFSupport 0.21
395 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
397 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
398 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
399 TestStartStop/group/no-preload/serial/Pause 2.62
400 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.07
401 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
402 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.74
403 TestStartStop/group/newest-cni/serial/DeployApp 0
404 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
405 TestStartStop/group/newest-cni/serial/Stop 10.57
406 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
407 TestStartStop/group/newest-cni/serial/SecondStart 36.96
408 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
411 TestStartStop/group/newest-cni/serial/Pause 2.31
TestDownloadOnly/v1.28.0/json-events (8.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-838975 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-838975 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.618477814s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.62s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1120 20:20:59.768752    7706 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1120 20:20:59.768844    7706 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-838975
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-838975: exit status 85 (72.771369ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-838975 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:20:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:20:51.202619    7717 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:20:51.202900    7717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:20:51.202911    7717 out.go:374] Setting ErrFile to fd 2...
	I1120 20:20:51.202915    7717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:20:51.203112    7717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	W1120 20:20:51.203231    7717 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21923-3793/.minikube/config/config.json: open /home/jenkins/minikube-integration/21923-3793/.minikube/config/config.json: no such file or directory
	I1120 20:20:51.203708    7717 out.go:368] Setting JSON to true
	I1120 20:20:51.204587    7717 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":201,"bootTime":1763669850,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:20:51.204677    7717 start.go:143] virtualization: kvm guest
	I1120 20:20:51.206966    7717 out.go:99] [download-only-838975] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1120 20:20:51.207086    7717 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball: no such file or directory
	I1120 20:20:51.207103    7717 notify.go:221] Checking for updates...
	I1120 20:20:51.208273    7717 out.go:171] MINIKUBE_LOCATION=21923
	I1120 20:20:51.209569    7717 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:20:51.210964    7717 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:20:51.212115    7717 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:20:51.213263    7717 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1120 20:20:51.215360    7717 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 20:20:51.215573    7717 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:20:51.721933    7717 out.go:99] Using the kvm2 driver based on user configuration
	I1120 20:20:51.721969    7717 start.go:309] selected driver: kvm2
	I1120 20:20:51.721975    7717 start.go:930] validating driver "kvm2" against <nil>
	I1120 20:20:51.722391    7717 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:20:51.723155    7717 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1120 20:20:51.723366    7717 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 20:20:51.723400    7717 cni.go:84] Creating CNI manager for ""
	I1120 20:20:51.723461    7717 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1120 20:20:51.723474    7717 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1120 20:20:51.723525    7717 start.go:353] cluster config:
	{Name:download-only-838975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-838975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:20:51.723752    7717 iso.go:125] acquiring lock: {Name:mk3c766c9e1fe11496377c94b3a13c2b186bdb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:20:51.725374    7717 out.go:99] Downloading VM boot image ...
	I1120 20:20:51.725412    7717 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21923-3793/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1120 20:20:55.290071    7717 out.go:99] Starting "download-only-838975" primary control-plane node in "download-only-838975" cluster
	I1120 20:20:55.290105    7717 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 20:20:55.306065    7717 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1120 20:20:55.306095    7717 cache.go:65] Caching tarball of preloaded images
	I1120 20:20:55.306294    7717 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 20:20:55.308185    7717 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1120 20:20:55.308212    7717 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1120 20:20:55.330044    7717 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1120 20:20:55.330193    7717 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
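
The ?checksum=md5:... query string on the preload URL above is interpreted client-side by minikube's downloader, not by the server: the tarball is fetched and then verified against the md5 obtained from the GCS API at 20:20:55.330044. A rough manual equivalent with standard tools:

    # Fetch the preload tarball, then verify it against the md5 from the log.
    curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
    echo '72bc7f8573f574c02d8c9a9b3496176b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4' | md5sum -c -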
	
	
	* The control-plane node download-only-838975 host does not exist
	  To start a cluster, run: "minikube start -p download-only-838975"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-838975
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-948147 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-948147 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.228795459s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1120 20:21:03.377162    7706 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1120 20:21:03.377201    7706 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-948147
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-948147: exit status 85 (74.929059ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-838975 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:20 UTC │ 20 Nov 25 20:20 UTC │
	│ delete  │ -p download-only-838975                                                                                                                                                 │ download-only-838975 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │ 20 Nov 25 20:21 UTC │
	│ start   │ -o=json --download-only -p download-only-948147 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-948147 │ jenkins │ v1.37.0 │ 20 Nov 25 20:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:21:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:21:00.199391    7931 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:21:00.199658    7931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:00.199668    7931 out.go:374] Setting ErrFile to fd 2...
	I1120 20:21:00.199673    7931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:21:00.199844    7931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:21:00.200326    7931 out.go:368] Setting JSON to true
	I1120 20:21:00.201178    7931 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":210,"bootTime":1763669850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:21:00.201275    7931 start.go:143] virtualization: kvm guest
	I1120 20:21:00.203008    7931 out.go:99] [download-only-948147] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:21:00.203185    7931 notify.go:221] Checking for updates...
	I1120 20:21:00.204553    7931 out.go:171] MINIKUBE_LOCATION=21923
	I1120 20:21:00.205997    7931 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:21:00.207429    7931 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:21:00.208632    7931 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:21:00.210001    7931 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-948147 host does not exist
	  To start a cluster, run: "minikube start -p download-only-948147"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-948147
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1120 20:21:04.048361    7706 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-717684 --alsologtostderr --binary-mirror http://127.0.0.1:46607 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-717684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-717684
--- PASS: TestBinaryMirror (0.64s)

                                                
                                    
TestOffline (106.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-618223 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-618223 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m45.89762532s)
helpers_test.go:175: Cleaning up "offline-crio-618223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-618223
--- PASS: TestOffline (106.81s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-947553
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-947553: exit status 85 (62.522386ms)

                                                
                                                
-- stdout --
	* Profile "addons-947553" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-947553"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-947553
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-947553: exit status 85 (61.667822ms)

                                                
                                                
-- stdout --
	* Profile "addons-947553" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-947553"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (210.93s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-947553 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-947553 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m30.929175796s)
--- PASS: TestAddons/Setup (210.93s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-947553 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-947553 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.59s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-947553 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-947553 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [709b0bdb-dd50-4d23-b6f1-1f659e2347cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [709b0bdb-dd50-4d23-b6f1-1f659e2347cf] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.005391749s
addons_test.go:694: (dbg) Run:  kubectl --context addons-947553 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-947553 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-947553 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.59s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.7s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.361042ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-947553
addons_test.go:332: (dbg) Run:  kubectl --context addons-947553 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-dnx8n" [144048d7-70cb-4183-850c-037db831f39a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004947003s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 addons disable inspektor-gadget --alsologtostderr -v=1: (5.757745385s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.224349ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-kmwzl" [e97af059-59bf-41e6-8ddc-e4c61f85b89e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00439564s
addons_test.go:463: (dbg) Run:  kubectl --context addons-947553 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/Headlamp (79.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-947553 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-947553 --alsologtostderr -v=1: (1.148555483s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-jclw2" [7a09cdd9-c227-4427-a2b9-b5f32de97ab7] Pending
helpers_test.go:352: "headlamp-6945c6f4d-jclw2" [7a09cdd9-c227-4427-a2b9-b5f32de97ab7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-jclw2" [7a09cdd9-c227-4427-a2b9-b5f32de97ab7] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-jclw2" [7a09cdd9-c227-4427-a2b9-b5f32de97ab7] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 1m12.067382734s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-947553 addons disable headlamp --alsologtostderr -v=1: (6.121059229s)
--- PASS: TestAddons/parallel/Headlamp (79.34s)

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-7n4rg" [68978fe7-8675-456d-9904-948b4c518083] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003870482s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-g5s2s" [9a1503ad-8dde-46df-a547-36c4aeb292d9] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005058549s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/StoppedEnableDisable (81.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-947553
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-947553: (1m21.506582129s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-947553
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-947553
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-947553
--- PASS: TestAddons/StoppedEnableDisable (81.71s)

TestCertOptions (45.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-820435 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-820435 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.107145204s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-820435 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-820435 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-820435 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-820435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-820435
--- PASS: TestCertOptions (45.41s)

TestCertExpiration (298.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-925075 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-925075 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (42.193681213s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-925075 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-925075 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m15.01602198s)
helpers_test.go:175: Cleaning up "cert-expiration-925075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-925075
--- PASS: TestCertExpiration (298.13s)

TestForceSystemdFlag (79.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-463882 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-463882 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.413886386s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-463882 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-463882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-463882
--- PASS: TestForceSystemdFlag (79.41s)

TestForceSystemdEnv (90.07s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-788153 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-788153 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m29.082558952s)
helpers_test.go:175: Cleaning up "force-systemd-env-788153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-788153
--- PASS: TestForceSystemdEnv (90.07s)

TestErrorSpam/setup (42.04s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-673989 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-673989 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-673989 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-673989 --driver=kvm2  --container-runtime=crio: (42.043357907s)
--- PASS: TestErrorSpam/setup (42.04s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.65s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 status
--- PASS: TestErrorSpam/status (0.65s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (5.4s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 stop: (2.06856685s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 stop: (1.887681679s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-673989 --log_dir /tmp/nospam-673989 stop: (1.444266361s)
--- PASS: TestErrorSpam/stop (5.40s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21923-3793/.minikube/files/etc/test/nested/copy/7706/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933412 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-933412 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.071025663s)
--- PASS: TestFunctional/serial/StartWithProxy (55.07s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.64s)

=== RUN   TestFunctional/serial/SoftStart
I1120 20:37:29.360887    7706 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933412 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-933412 --alsologtostderr -v=8: (38.634487879s)
functional_test.go:678: soft start took 38.63522085s for "functional-933412" cluster.
I1120 20:38:07.995782    7706 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (38.64s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-933412 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 cache add registry.k8s.io/pause:3.1: (1.021957247s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 cache add registry.k8s.io/pause:3.3: (1.082796745s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 cache add registry.k8s.io/pause:latest: (1.095244875s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-933412 /tmp/TestFunctionalserialCacheCmdcacheadd_local2673095348/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cache add minikube-local-cache-test:functional-933412
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cache delete minikube-local-cache-test:functional-933412
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-933412
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.758151ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 kubectl -- --context functional-933412 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-933412 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (58.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933412 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-933412 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.801583627s)
functional_test.go:776: restart took 58.80172171s for "functional-933412" cluster.
I1120 20:39:13.399031    7706 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (58.80s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-933412 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.3s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 logs: (1.296154408s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

TestFunctional/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 logs --file /tmp/TestFunctionalserialLogsFileCmd1795398443/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 logs --file /tmp/TestFunctionalserialLogsFileCmd1795398443/001/logs.txt: (1.380391018s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

TestFunctional/serial/InvalidService (4.18s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-933412 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-933412
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-933412: exit status 115 (222.220229ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.212:30806 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-933412 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.18s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 config get cpus: exit status 14 (57.168653ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 config get cpus: exit status 14 (63.303793ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933412 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-933412 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (103.299716ms)

-- stdout --
	* [functional-933412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1120 20:40:31.879389   18495 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:40:31.879609   18495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.879617   18495 out.go:374] Setting ErrFile to fd 2...
	I1120 20:40:31.879621   18495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.879782   18495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:40:31.880214   18495 out.go:368] Setting JSON to false
	I1120 20:40:31.881050   18495 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1382,"bootTime":1763669850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:40:31.881139   18495 start.go:143] virtualization: kvm guest
	I1120 20:40:31.882793   18495 out.go:179] * [functional-933412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:40:31.883898   18495 notify.go:221] Checking for updates...
	I1120 20:40:31.883924   18495 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:40:31.885104   18495 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:40:31.886563   18495 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:40:31.887779   18495 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:40:31.888803   18495 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:40:31.889789   18495 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:40:31.891273   18495 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:40:31.891678   18495 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:40:31.922016   18495 out.go:179] * Using the kvm2 driver based on existing profile
	I1120 20:40:31.923124   18495 start.go:309] selected driver: kvm2
	I1120 20:40:31.923138   18495 start.go:930] validating driver "kvm2" against &{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:31.923237   18495 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:40:31.925146   18495 out.go:203] 
	W1120 20:40:31.926223   18495 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1120 20:40:31.927280   18495 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933412 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.21s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933412 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-933412 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (108.047006ms)

-- stdout --
	* [functional-933412] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1120 20:40:31.775200   18479 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:40:31.775295   18479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.775303   18479 out.go:374] Setting ErrFile to fd 2...
	I1120 20:40:31.775308   18479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:40:31.775547   18479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 20:40:31.775990   18479 out.go:368] Setting JSON to false
	I1120 20:40:31.776821   18479 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1382,"bootTime":1763669850,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:40:31.776933   18479 start.go:143] virtualization: kvm guest
	I1120 20:40:31.778672   18479 out.go:179] * [functional-933412] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1120 20:40:31.779825   18479 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:40:31.779806   18479 notify.go:221] Checking for updates...
	I1120 20:40:31.781887   18479 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:40:31.783002   18479 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 20:40:31.783979   18479 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 20:40:31.785180   18479 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:40:31.786278   18479 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:40:31.787722   18479 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:40:31.788183   18479 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:40:31.818540   18479 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1120 20:40:31.819551   18479 start.go:309] selected driver: kvm2
	I1120 20:40:31.819563   18479 start.go:930] validating driver "kvm2" against &{Name:functional-933412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:40:31.819669   18479 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:40:31.821570   18479 out.go:203] 
	W1120 20:40:31.822645   18479 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1120 20:40:31.823787   18479 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.65s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.65s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/SSHCmd (0.32s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

TestFunctional/parallel/CpCmd (1.05s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh -n functional-933412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cp functional-933412:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2552378709/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh -n functional-933412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh -n functional-933412 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.05s)

TestFunctional/parallel/FileSync (0.17s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7706/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo cat /etc/test/nested/copy/7706/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)

TestFunctional/parallel/CertSync (1.05s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7706.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo cat /etc/ssl/certs/7706.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7706.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo cat /usr/share/ca-certificates/7706.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/77062.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo cat /etc/ssl/certs/77062.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/77062.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo cat /usr/share/ca-certificates/77062.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.05s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-933412 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 ssh "sudo systemctl is-active docker": exit status 1 (178.176026ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 ssh "sudo systemctl is-active containerd": exit status 1 (185.109911ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
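Note the exit-status semantics that make this test pass: `systemctl is-active` prints "inactive" and exits with status 3 when a unit is not running, so the non-zero exits above are the expected outcome for the two disabled runtimes on a crio cluster. A hedged Go sketch of that check (helper names are illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeInactive returns true when the unit is reported inactive:
// `systemctl is-active` exits non-zero (status 3) and prints "inactive".
func runtimeInactive(profile, unit string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeInactive("functional-933412", unit))
	}
}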

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "235.60684ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.328797ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "235.078159ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.31165ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (66.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdany-port887797097/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763671161701273709" to /tmp/TestFunctionalparallelMountCmdany-port887797097/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763671161701273709" to /tmp/TestFunctionalparallelMountCmdany-port887797097/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763671161701273709" to /tmp/TestFunctionalparallelMountCmdany-port887797097/001/test-1763671161701273709
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (151.298998ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1120 20:39:21.852874    7706 retry.go:31] will retry after 335.895441ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 20 20:39 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 20 20:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 20 20:39 test-1763671161701273709
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh cat /mount-9p/test-1763671161701273709
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-933412 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [18caa9ca-3098-4bdc-baf5-124bf98f0577] Pending
helpers_test.go:352: "busybox-mount" [18caa9ca-3098-4bdc-baf5-124bf98f0577] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [18caa9ca-3098-4bdc-baf5-124bf98f0577] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [18caa9ca-3098-4bdc-baf5-124bf98f0577] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m5.003528509s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-933412 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdany-port887797097/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (66.78s)
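The retry at 20:39:21 above is inherent to the flow: `minikube mount` runs as a long-lived daemon and the 9p mount appears asynchronously, so the first `findmnt` probe can race it. A sketch of the same start-then-poll pattern in Go (host directory hypothetical; paths and probe command taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the 9p mount daemon in the background.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-933412", "/tmp/host-dir:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill() // tear the mount down afterwards

	// Poll findmnt until the mount shows up, as the test's retry does.
	for i := 0; i < 10; i++ {
		if exec.Command("out/minikube-linux-amd64", "-p", "functional-933412",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run() == nil {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}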

TestFunctional/parallel/MountCmd/specific-port (1.22s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdspecific-port3575881837/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (147.770298ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1120 20:40:28.625803    7706 retry.go:31] will retry after 413.948751ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdspecific-port3575881837/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 ssh "sudo umount -f /mount-9p": exit status 1 (150.402306ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-933412 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdspecific-port3575881837/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T" /mount1: exit status 1 (170.087575ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1120 20:40:29.868738    7706 retry.go:31] will retry after 700.103818ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-933412 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933412 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2509048637/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933412 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-933412
localhost/kicbase/echo-server:functional-933412
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933412 image ls --format short --alsologtostderr:
I1120 20:45:37.898664   20123 out.go:360] Setting OutFile to fd 1 ...
I1120 20:45:37.898918   20123 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:37.898928   20123 out.go:374] Setting ErrFile to fd 2...
I1120 20:45:37.898932   20123 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:37.899130   20123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
I1120 20:45:37.899666   20123 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:37.899759   20123 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:37.901950   20123 ssh_runner.go:195] Run: systemctl --version
I1120 20:45:37.904542   20123 main.go:143] libmachine: domain functional-933412 has defined MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:37.904993   20123 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:98:26", ip: ""} in network mk-functional-933412: {Iface:virbr1 ExpiryTime:2025-11-20 21:36:50 +0000 UTC Type:0 Mac:52:54:00:aa:98:26 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:functional-933412 Clientid:01:52:54:00:aa:98:26}
I1120 20:45:37.905020   20123 main.go:143] libmachine: domain functional-933412 has defined IP address 192.168.39.212 and MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:37.905174   20123 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/functional-933412/id_rsa Username:docker}
I1120 20:45:37.994301   20123 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933412 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-933412  │ 9056ab77afb8e │ 4.94MB │
│ localhost/my-image                      │ functional-933412  │ cd99ae00acfe1 │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-933412  │ e99081a6baf88 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933412 image ls --format table --alsologtostderr:
I1120 20:45:41.328368   20204 out.go:360] Setting OutFile to fd 1 ...
I1120 20:45:41.329053   20204 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:41.329076   20204 out.go:374] Setting ErrFile to fd 2...
I1120 20:45:41.329083   20204 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:41.329498   20204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
I1120 20:45:41.330547   20204 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:41.330647   20204 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:41.332889   20204 ssh_runner.go:195] Run: systemctl --version
I1120 20:45:41.335004   20204 main.go:143] libmachine: domain functional-933412 has defined MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:41.335406   20204 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:98:26", ip: ""} in network mk-functional-933412: {Iface:virbr1 ExpiryTime:2025-11-20 21:36:50 +0000 UTC Type:0 Mac:52:54:00:aa:98:26 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:functional-933412 Clientid:01:52:54:00:aa:98:26}
I1120 20:45:41.335431   20204 main.go:143] libmachine: domain functional-933412 has defined IP address 192.168.39.212 and MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:41.335541   20204 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/functional-933412/id_rsa Username:docker}
I1120 20:45:41.418280   20204 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933412 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-933412"],"size":"4943877"},
{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},
{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"a6381b543729fc7adbed222002160b0f1e00dad2168c56b20eb74e096befa8e2","repoDigests":["docker.io/library/fc735eca5050378dbddfb26eddb0c6827aab8546a88f0db44646ab4e6a87245c-tmp@sha256:293ed3153fecbdb2c6b241ba491b1a085053eb7e86635392b6f77649b8239782"],"repoTags":[],"size":"1466018"},
{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},
{"id":"e99081a6baf88fed1e7c035febe490c6260ee43d6252e259470fe0ed1efc2e43","repoDigests":["localhost/minikube-local-cache-test@sha256:e0ff1238b00c2e7fe3c834b0b3f4d32268852e3ed109461619a68873527289fc"],"repoTags":["localhost/minikube-local-cache-test:functional-933412"],"size":"3330"},
{"id":"cd99ae00acfe19bba26bdc7cee624c7f79d1ce7b7bb31238e2e4a0847fea0bcd","repoDigests":["localhost/my-image@sha256:1f9d37bbe0e764342ae559599f359226c654f1e6db227b5e2c998005c0213bd8"],"repoTags":["localhost/my-image:functional-933412"],"size":"1468600"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933412 image ls --format json --alsologtostderr:
I1120 20:45:41.145530   20194 out.go:360] Setting OutFile to fd 1 ...
I1120 20:45:41.145633   20194 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:41.145641   20194 out.go:374] Setting ErrFile to fd 2...
I1120 20:45:41.145645   20194 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:41.145811   20194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
I1120 20:45:41.146363   20194 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:41.146453   20194 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:41.148314   20194 ssh_runner.go:195] Run: systemctl --version
I1120 20:45:41.150470   20194 main.go:143] libmachine: domain functional-933412 has defined MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:41.150840   20194 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:98:26", ip: ""} in network mk-functional-933412: {Iface:virbr1 ExpiryTime:2025-11-20 21:36:50 +0000 UTC Type:0 Mac:52:54:00:aa:98:26 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:functional-933412 Clientid:01:52:54:00:aa:98:26}
I1120 20:45:41.150877   20194 main.go:143] libmachine: domain functional-933412 has defined IP address 192.168.39.212 and MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:41.151014   20194 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/functional-933412/id_rsa Username:docker}
I1120 20:45:41.230038   20194 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
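The array above (reflowed here, one image per line, for readability) follows a stable shape — id, repoDigests, repoTags, size — so it decodes into a small struct. A sketch of consuming it, with the struct defined locally for illustration rather than taken from minikube's code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-933412",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v %s bytes\n", img.RepoTags, img.Size)
	}
}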

TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933412 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-933412
size: "4943877"
- id: e99081a6baf88fed1e7c035febe490c6260ee43d6252e259470fe0ed1efc2e43
repoDigests:
- localhost/minikube-local-cache-test@sha256:e0ff1238b00c2e7fe3c834b0b3f4d32268852e3ed109461619a68873527289fc
repoTags:
- localhost/minikube-local-cache-test:functional-933412
size: "3330"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933412 image ls --format yaml --alsologtostderr:
I1120 20:45:38.095893   20134 out.go:360] Setting OutFile to fd 1 ...
I1120 20:45:38.096148   20134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:38.096163   20134 out.go:374] Setting ErrFile to fd 2...
I1120 20:45:38.096168   20134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:38.096350   20134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
I1120 20:45:38.096894   20134 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:38.096997   20134 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:38.098889   20134 ssh_runner.go:195] Run: systemctl --version
I1120 20:45:38.101055   20134 main.go:143] libmachine: domain functional-933412 has defined MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:38.101752   20134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:98:26", ip: ""} in network mk-functional-933412: {Iface:virbr1 ExpiryTime:2025-11-20 21:36:50 +0000 UTC Type:0 Mac:52:54:00:aa:98:26 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:functional-933412 Clientid:01:52:54:00:aa:98:26}
I1120 20:45:38.101784   20134 main.go:143] libmachine: domain functional-933412 has defined IP address 192.168.39.212 and MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:38.101968   20134 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/functional-933412/id_rsa Username:docker}
I1120 20:45:38.181491   20134 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933412 ssh pgrep buildkitd: exit status 1 (151.784176ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image build -t localhost/my-image:functional-933412 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 image build -t localhost/my-image:functional-933412 testdata/build --alsologtostderr: (2.517137668s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933412 image build -t localhost/my-image:functional-933412 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a6381b54372
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-933412
--> cd99ae00acf
Successfully tagged localhost/my-image:functional-933412
cd99ae00acfe19bba26bdc7cee624c7f79d1ce7b7bb31238e2e4a0847fea0bcd
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933412 image build -t localhost/my-image:functional-933412 testdata/build --alsologtostderr:
I1120 20:45:38.432193   20156 out.go:360] Setting OutFile to fd 1 ...
I1120 20:45:38.432377   20156 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:38.432387   20156 out.go:374] Setting ErrFile to fd 2...
I1120 20:45:38.432391   20156 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:45:38.432597   20156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
I1120 20:45:38.433162   20156 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:38.433889   20156 config.go:182] Loaded profile config "functional-933412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:45:38.435979   20156 ssh_runner.go:195] Run: systemctl --version
I1120 20:45:38.438367   20156 main.go:143] libmachine: domain functional-933412 has defined MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:38.438902   20156 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:98:26", ip: ""} in network mk-functional-933412: {Iface:virbr1 ExpiryTime:2025-11-20 21:36:50 +0000 UTC Type:0 Mac:52:54:00:aa:98:26 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:functional-933412 Clientid:01:52:54:00:aa:98:26}
I1120 20:45:38.438940   20156 main.go:143] libmachine: domain functional-933412 has defined IP address 192.168.39.212 and MAC address 52:54:00:aa:98:26 in network mk-functional-933412
I1120 20:45:38.439132   20156 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/functional-933412/id_rsa Username:docker}
I1120 20:45:38.519020   20156 build_images.go:162] Building image from path: /tmp/build.1463924294.tar
I1120 20:45:38.519092   20156 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1120 20:45:38.533421   20156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1463924294.tar
I1120 20:45:38.538971   20156 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1463924294.tar: stat -c "%s %y" /var/lib/minikube/build/build.1463924294.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1463924294.tar': No such file or directory
I1120 20:45:38.539005   20156 ssh_runner.go:362] scp /tmp/build.1463924294.tar --> /var/lib/minikube/build/build.1463924294.tar (3072 bytes)
I1120 20:45:38.572517   20156 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1463924294
I1120 20:45:38.586245   20156 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1463924294 -xf /var/lib/minikube/build/build.1463924294.tar
I1120 20:45:38.599443   20156 crio.go:315] Building image: /var/lib/minikube/build/build.1463924294
I1120 20:45:38.599515   20156 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-933412 /var/lib/minikube/build/build.1463924294 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1120 20:45:40.857340   20156 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-933412 /var/lib/minikube/build/build.1463924294 --cgroup-manager=cgroupfs: (2.257803663s)
I1120 20:45:40.857429   20156 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1463924294
I1120 20:45:40.871717   20156 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1463924294.tar
I1120 20:45:40.885862   20156 build_images.go:218] Built localhost/my-image:functional-933412 from /tmp/build.1463924294.tar
I1120 20:45:40.885916   20156 build_images.go:134] succeeded building to: functional-933412
I1120 20:45:40.885921   20156 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)
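From the STEP lines in the stdout above, the testdata/build context evidently holds a content.txt plus a three-step Dockerfile along these lines (inferred from the build log, not read from the repo):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

As the Stderr trace shows, minikube packs that context into a tar (/tmp/build.1463924294.tar), copies it into the VM, unpacks it under /var/lib/minikube/build, and drives `sudo podman build` against it.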

TestFunctional/parallel/ImageCommands/Setup (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-933412
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image load --daemon kicbase/echo-server:functional-933412 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 image load --daemon kicbase/echo-server:functional-933412 --alsologtostderr: (1.08705765s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image load --daemon kicbase/echo-server:functional-933412 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-933412
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image load --daemon kicbase/echo-server:functional-933412 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image save kicbase/echo-server:functional-933412 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image rm kicbase/echo-server:functional-933412 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)
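Taken together, ImageSaveToFile and ImageLoadFromFile exercise a tarball round-trip through the cluster's runtime. A compact Go sketch of the same sequence, reusing the tar path from the log (illustrative only, not the test's code):

package main

import "os/exec"

func main() {
	profile := "functional-933412"
	tar := "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar"

	// Export the image from the cluster's runtime to a tarball...
	save := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "save", "kicbase/echo-server:"+profile, tar)
	if err := save.Run(); err != nil {
		panic(err)
	}

	// ...then import the tarball back into the runtime.
	load := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "load", tar)
	if err := load.Run(); err != nil {
		panic(err)
	}
}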

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-933412
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 image save --daemon kicbase/echo-server:functional-933412 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-933412
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/parallel/ServiceCmd/List (1.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 service list: (1.215528575s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.22s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-933412 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-933412 service list -o json: (1.22561854s)
functional_test.go:1504: Took "1.225701061s" to run "out/minikube-linux-amd64 -p functional-933412 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-933412
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-933412
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-933412
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (221.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1120 20:55:59.395538    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:20.328609    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:20.335046    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:20.346464    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:20.367915    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:20.409347    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:20.491695    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:20.652994    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:20.974906    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m41.172938412s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5
E1120 20:59:21.617016    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (221.73s)
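
Note: the flags above are what build the multi-node HA control plane; a minimal sketch of the same invocation outside the harness (the ha-demo profile name is illustrative):

	# --ha provisions additional control-plane nodes; clients reach the apiserver
	# through a shared endpoint (the 192.168.39.254:8443 seen in the status logs)
	minikube start -p ha-demo --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
	minikube -p ha-demo status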

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.72s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- rollout status deployment/busybox
E1120 20:59:22.898444    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:25.461319    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 kubectl -- rollout status deployment/busybox: (4.353620059s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-cps9z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-kw4kg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-p6xwp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-cps9z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-kw4kg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-p6xwp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-cps9z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-kw4kg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-p6xwp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.72s)
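
Note: the pattern above (wait for rollout, then run a DNS probe in every pod) scripts directly; a sketch assuming the busybox deployment from the test manifest is running:

	kubectl rollout status deployment/busybox
	for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done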

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.36s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-cps9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-cps9z -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-kw4kg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-kw4kg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-p6xwp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 kubectl -- exec busybox-7b57f96db7-p6xwp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.36s)
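
Note: the awk/cut pipeline above pulls the host IP out of nslookup's fifth output line (a busybox-specific layout); a standalone version of the same probe, using a pod name from this run:

	HOST_IP=$(kubectl exec busybox-7b57f96db7-cps9z -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl exec busybox-7b57f96db7-cps9z -- ping -c 1 "$HOST_IP"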

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.68s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 node add --alsologtostderr -v 5
E1120 20:59:30.583062    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:36.328037    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:59:40.825269    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:00:01.307286    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 node add --alsologtostderr -v 5: (43.993267467s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.68s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-920024 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
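
Note: the jsonpath above dumps every node's full label map; a sketch of a narrower query for one standard label (the escaped-dot syntax is how kubectl jsonpath addresses dotted keys):

	kubectl --context ha-920024 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.io/hostname}{"\n"}{end}'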

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.87s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp testdata/cp-test.txt ha-920024:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3325025645/001/cp-test_ha-920024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024:/home/docker/cp-test.txt ha-920024-m02:/home/docker/cp-test_ha-920024_ha-920024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test_ha-920024_ha-920024-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024:/home/docker/cp-test.txt ha-920024-m03:/home/docker/cp-test_ha-920024_ha-920024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m03 "sudo cat /home/docker/cp-test_ha-920024_ha-920024-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024:/home/docker/cp-test.txt ha-920024-m04:/home/docker/cp-test_ha-920024_ha-920024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m04 "sudo cat /home/docker/cp-test_ha-920024_ha-920024-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp testdata/cp-test.txt ha-920024-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3325025645/001/cp-test_ha-920024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m02:/home/docker/cp-test.txt ha-920024:/home/docker/cp-test_ha-920024-m02_ha-920024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024 "sudo cat /home/docker/cp-test_ha-920024-m02_ha-920024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m02:/home/docker/cp-test.txt ha-920024-m03:/home/docker/cp-test_ha-920024-m02_ha-920024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m03 "sudo cat /home/docker/cp-test_ha-920024-m02_ha-920024-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m02:/home/docker/cp-test.txt ha-920024-m04:/home/docker/cp-test_ha-920024-m02_ha-920024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m04 "sudo cat /home/docker/cp-test_ha-920024-m02_ha-920024-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp testdata/cp-test.txt ha-920024-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3325025645/001/cp-test_ha-920024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m03:/home/docker/cp-test.txt ha-920024:/home/docker/cp-test_ha-920024-m03_ha-920024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024 "sudo cat /home/docker/cp-test_ha-920024-m03_ha-920024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m03:/home/docker/cp-test.txt ha-920024-m02:/home/docker/cp-test_ha-920024-m03_ha-920024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test_ha-920024-m03_ha-920024-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m03:/home/docker/cp-test.txt ha-920024-m04:/home/docker/cp-test_ha-920024-m03_ha-920024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m04 "sudo cat /home/docker/cp-test_ha-920024-m03_ha-920024-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp testdata/cp-test.txt ha-920024-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3325025645/001/cp-test_ha-920024-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m04:/home/docker/cp-test.txt ha-920024:/home/docker/cp-test_ha-920024-m04_ha-920024.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024 "sudo cat /home/docker/cp-test_ha-920024-m04_ha-920024.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m04:/home/docker/cp-test.txt ha-920024-m02:/home/docker/cp-test_ha-920024-m04_ha-920024-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test_ha-920024-m04_ha-920024-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 cp ha-920024-m04:/home/docker/cp-test.txt ha-920024-m03:/home/docker/cp-test_ha-920024-m04_ha-920024-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 ssh -n ha-920024-m03 "sudo cat /home/docker/cp-test_ha-920024-m04_ha-920024-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.87s)
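
Note: every hop in the copy matrix above is the same two-step pattern; a sketch of a single hop with the node names from this run:

	# copy into one node, fan out node-to-node, then verify over ssh
	minikube -p ha-920024 cp testdata/cp-test.txt ha-920024:/home/docker/cp-test.txt
	minikube -p ha-920024 cp ha-920024:/home/docker/cp-test.txt ha-920024-m02:/home/docker/cp-test_ha-920024_ha-920024-m02.txt
	minikube -p ha-920024 ssh -n ha-920024-m02 "sudo cat /home/docker/cp-test_ha-920024_ha-920024-m02.txt"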

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (75.26s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 node stop m02 --alsologtostderr -v 5
E1120 21:00:42.270330    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 node stop m02 --alsologtostderr -v 5: (1m14.73375182s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5: exit status 7 (526.600555ms)

-- stdout --
	ha-920024
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-920024-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920024-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-920024-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1120 21:01:41.066366   25594 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:01:41.066600   25594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:01:41.066609   25594 out.go:374] Setting ErrFile to fd 2...
	I1120 21:01:41.066613   25594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:01:41.066824   25594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:01:41.067006   25594 out.go:368] Setting JSON to false
	I1120 21:01:41.067037   25594 mustload.go:66] Loading cluster: ha-920024
	I1120 21:01:41.067127   25594 notify.go:221] Checking for updates...
	I1120 21:01:41.067385   25594 config.go:182] Loaded profile config "ha-920024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:01:41.067400   25594 status.go:174] checking status of ha-920024 ...
	I1120 21:01:41.069575   25594 status.go:371] ha-920024 host status = "Running" (err=<nil>)
	I1120 21:01:41.069592   25594 host.go:66] Checking if "ha-920024" exists ...
	I1120 21:01:41.071789   25594 main.go:143] libmachine: domain ha-920024 has defined MAC address 52:54:00:c9:96:93 in network mk-ha-920024
	I1120 21:01:41.072243   25594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c9:96:93", ip: ""} in network mk-ha-920024: {Iface:virbr1 ExpiryTime:2025-11-20 21:55:56 +0000 UTC Type:0 Mac:52:54:00:c9:96:93 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-920024 Clientid:01:52:54:00:c9:96:93}
	I1120 21:01:41.072278   25594 main.go:143] libmachine: domain ha-920024 has defined IP address 192.168.39.226 and MAC address 52:54:00:c9:96:93 in network mk-ha-920024
	I1120 21:01:41.072433   25594 host.go:66] Checking if "ha-920024" exists ...
	I1120 21:01:41.072634   25594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:01:41.074666   25594 main.go:143] libmachine: domain ha-920024 has defined MAC address 52:54:00:c9:96:93 in network mk-ha-920024
	I1120 21:01:41.075031   25594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c9:96:93", ip: ""} in network mk-ha-920024: {Iface:virbr1 ExpiryTime:2025-11-20 21:55:56 +0000 UTC Type:0 Mac:52:54:00:c9:96:93 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:ha-920024 Clientid:01:52:54:00:c9:96:93}
	I1120 21:01:41.075051   25594 main.go:143] libmachine: domain ha-920024 has defined IP address 192.168.39.226 and MAC address 52:54:00:c9:96:93 in network mk-ha-920024
	I1120 21:01:41.075186   25594 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/ha-920024/id_rsa Username:docker}
	I1120 21:01:41.160597   25594 ssh_runner.go:195] Run: systemctl --version
	I1120 21:01:41.167634   25594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:01:41.187619   25594 kubeconfig.go:125] found "ha-920024" server: "https://192.168.39.254:8443"
	I1120 21:01:41.187659   25594 api_server.go:166] Checking apiserver status ...
	I1120 21:01:41.187723   25594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:01:41.208929   25594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup
	W1120 21:01:41.220929   25594 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:01:41.220996   25594 ssh_runner.go:195] Run: ls
	I1120 21:01:41.227405   25594 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1120 21:01:41.234653   25594 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1120 21:01:41.234680   25594 status.go:463] ha-920024 apiserver status = Running (err=<nil>)
	I1120 21:01:41.234700   25594 status.go:176] ha-920024 status: &{Name:ha-920024 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:01:41.234715   25594 status.go:174] checking status of ha-920024-m02 ...
	I1120 21:01:41.236361   25594 status.go:371] ha-920024-m02 host status = "Stopped" (err=<nil>)
	I1120 21:01:41.236385   25594 status.go:384] host is not running, skipping remaining checks
	I1120 21:01:41.236393   25594 status.go:176] ha-920024-m02 status: &{Name:ha-920024-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:01:41.236416   25594 status.go:174] checking status of ha-920024-m03 ...
	I1120 21:01:41.237836   25594 status.go:371] ha-920024-m03 host status = "Running" (err=<nil>)
	I1120 21:01:41.237882   25594 host.go:66] Checking if "ha-920024-m03" exists ...
	I1120 21:01:41.240506   25594 main.go:143] libmachine: domain ha-920024-m03 has defined MAC address 52:54:00:dc:c6:de in network mk-ha-920024
	I1120 21:01:41.240972   25594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:c6:de", ip: ""} in network mk-ha-920024: {Iface:virbr1 ExpiryTime:2025-11-20 21:58:17 +0000 UTC Type:0 Mac:52:54:00:dc:c6:de Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:ha-920024-m03 Clientid:01:52:54:00:dc:c6:de}
	I1120 21:01:41.241004   25594 main.go:143] libmachine: domain ha-920024-m03 has defined IP address 192.168.39.124 and MAC address 52:54:00:dc:c6:de in network mk-ha-920024
	I1120 21:01:41.241167   25594 host.go:66] Checking if "ha-920024-m03" exists ...
	I1120 21:01:41.241433   25594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:01:41.243819   25594 main.go:143] libmachine: domain ha-920024-m03 has defined MAC address 52:54:00:dc:c6:de in network mk-ha-920024
	I1120 21:01:41.244237   25594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:c6:de", ip: ""} in network mk-ha-920024: {Iface:virbr1 ExpiryTime:2025-11-20 21:58:17 +0000 UTC Type:0 Mac:52:54:00:dc:c6:de Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:ha-920024-m03 Clientid:01:52:54:00:dc:c6:de}
	I1120 21:01:41.244267   25594 main.go:143] libmachine: domain ha-920024-m03 has defined IP address 192.168.39.124 and MAC address 52:54:00:dc:c6:de in network mk-ha-920024
	I1120 21:01:41.244423   25594 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/ha-920024-m03/id_rsa Username:docker}
	I1120 21:01:41.333401   25594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:01:41.354596   25594 kubeconfig.go:125] found "ha-920024" server: "https://192.168.39.254:8443"
	I1120 21:01:41.354637   25594 api_server.go:166] Checking apiserver status ...
	I1120 21:01:41.354684   25594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:01:41.382037   25594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1850/cgroup
	W1120 21:01:41.396553   25594 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1850/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:01:41.396619   25594 ssh_runner.go:195] Run: ls
	I1120 21:01:41.405345   25594 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1120 21:01:41.412086   25594 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1120 21:01:41.412113   25594 status.go:463] ha-920024-m03 apiserver status = Running (err=<nil>)
	I1120 21:01:41.412124   25594 status.go:176] ha-920024-m03 status: &{Name:ha-920024-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:01:41.412145   25594 status.go:174] checking status of ha-920024-m04 ...
	I1120 21:01:41.414187   25594 status.go:371] ha-920024-m04 host status = "Running" (err=<nil>)
	I1120 21:01:41.414210   25594 host.go:66] Checking if "ha-920024-m04" exists ...
	I1120 21:01:41.417500   25594 main.go:143] libmachine: domain ha-920024-m04 has defined MAC address 52:54:00:9f:88:0e in network mk-ha-920024
	I1120 21:01:41.418083   25594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9f:88:0e", ip: ""} in network mk-ha-920024: {Iface:virbr1 ExpiryTime:2025-11-20 21:59:47 +0000 UTC Type:0 Mac:52:54:00:9f:88:0e Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-920024-m04 Clientid:01:52:54:00:9f:88:0e}
	I1120 21:01:41.418116   25594 main.go:143] libmachine: domain ha-920024-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:9f:88:0e in network mk-ha-920024
	I1120 21:01:41.418332   25594 host.go:66] Checking if "ha-920024-m04" exists ...
	I1120 21:01:41.418635   25594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:01:41.421292   25594 main.go:143] libmachine: domain ha-920024-m04 has defined MAC address 52:54:00:9f:88:0e in network mk-ha-920024
	I1120 21:01:41.421817   25594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9f:88:0e", ip: ""} in network mk-ha-920024: {Iface:virbr1 ExpiryTime:2025-11-20 21:59:47 +0000 UTC Type:0 Mac:52:54:00:9f:88:0e Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-920024-m04 Clientid:01:52:54:00:9f:88:0e}
	I1120 21:01:41.421896   25594 main.go:143] libmachine: domain ha-920024-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:9f:88:0e in network mk-ha-920024
	I1120 21:01:41.422082   25594 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/ha-920024-m04/id_rsa Username:docker}
	I1120 21:01:41.513031   25594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:01:41.531722   25594 status.go:176] ha-920024-m04 status: &{Name:ha-920024-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (75.26s)
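
Note: the non-zero exit above is expected rather than a failure: with one control-plane node stopped, status exits 7 (as observed in this run) while still printing per-node state. A sketch of using that from a script:

	minikube -p ha-920024 node stop m02
	minikube -p ha-920024 status
	rc=$?
	[ "$rc" -eq 7 ] && echo "degraded: some nodes are stopped (exit $rc)"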

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (43.79s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 node start m02 --alsologtostderr -v 5
E1120 21:02:04.191682    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 node start m02 --alsologtostderr -v 5: (42.939750709s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.79s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (306.36s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 stop --alsologtostderr -v 5
E1120 21:04:20.329062    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:04:36.328884    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:04:48.033835    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 stop --alsologtostderr -v 5: (2m59.90887904s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 start --wait true --alsologtostderr -v 5: (2m6.312097046s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (306.36s)
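
Note: the invariant checked above is that a full stop/start cycle preserves the node list; a sketch of verifying that by hand:

	minikube -p ha-920024 node list > /tmp/nodes.before
	minikube -p ha-920024 stop
	minikube -p ha-920024 start --wait true
	minikube -p ha-920024 node list > /tmp/nodes.after
	diff /tmp/nodes.before /tmp/nodes.after && echo "node list preserved"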

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.36s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 node delete m03 --alsologtostderr -v 5: (17.706786032s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.36s)
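
Note: the go-template above reduces each node to its Ready condition, so after the delete it should print one True per surviving node. The same check, minimally:

	minikube -p ha-920024 node delete m03
	kubectl get nodes    # the deleted node should no longer be listed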

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (167.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 stop --alsologtostderr -v 5
E1120 21:09:20.328574    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:09:36.330066    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 stop --alsologtostderr -v 5: (2m47.50002578s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5: exit status 7 (64.566065ms)

-- stdout --
	ha-920024
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920024-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920024-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1120 21:10:39.408927   28389 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:10:39.409199   28389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:39.409208   28389 out.go:374] Setting ErrFile to fd 2...
	I1120 21:10:39.409211   28389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:10:39.409436   28389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:10:39.409591   28389 out.go:368] Setting JSON to false
	I1120 21:10:39.409620   28389 mustload.go:66] Loading cluster: ha-920024
	I1120 21:10:39.409673   28389 notify.go:221] Checking for updates...
	I1120 21:10:39.410064   28389 config.go:182] Loaded profile config "ha-920024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:10:39.410082   28389 status.go:174] checking status of ha-920024 ...
	I1120 21:10:39.412115   28389 status.go:371] ha-920024 host status = "Stopped" (err=<nil>)
	I1120 21:10:39.412134   28389 status.go:384] host is not running, skipping remaining checks
	I1120 21:10:39.412140   28389 status.go:176] ha-920024 status: &{Name:ha-920024 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:10:39.412160   28389 status.go:174] checking status of ha-920024-m02 ...
	I1120 21:10:39.413268   28389 status.go:371] ha-920024-m02 host status = "Stopped" (err=<nil>)
	I1120 21:10:39.413285   28389 status.go:384] host is not running, skipping remaining checks
	I1120 21:10:39.413305   28389 status.go:176] ha-920024-m02 status: &{Name:ha-920024-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:10:39.413324   28389 status.go:174] checking status of ha-920024-m04 ...
	I1120 21:10:39.414470   28389 status.go:371] ha-920024-m04 host status = "Stopped" (err=<nil>)
	I1120 21:10:39.414485   28389 status.go:384] host is not running, skipping remaining checks
	I1120 21:10:39.414490   28389 status.go:176] ha-920024-m04 status: &{Name:ha-920024-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (167.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (98.33s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m37.644890859s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (98.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.15s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 node add --control-plane --alsologtostderr -v 5
E1120 21:12:39.397729    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-920024 node add --control-plane --alsologtostderr -v 5: (1m15.458738126s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-920024 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.15s)
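
Note: growing the control plane on a running HA cluster is the same node add as earlier plus one flag:

	minikube -p ha-920024 node add --control-plane
	minikube -p ha-920024 status    # the new node should report as type: Control Plane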

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

                                                
                                    
TestJSONOutput/start/Command (83.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-810697 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1120 21:14:20.329428    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:14:36.332573    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-810697 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.753967922s)
--- PASS: TestJSONOutput/start/Command (83.75s)
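
Note: with --output=json, start emits one CloudEvents-style JSON object per line (the same shape visible in the TestErrorJSONOutput stdout below); a sketch of filtering the step messages, assuming jq is available and using an illustrative profile name:

	minikube start -p json-demo --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'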

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-810697 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-810697 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.05s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-810697 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-810697 --output=json --user=testUser: (7.05256034s)
--- PASS: TestJSONOutput/stop/Command (7.05s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-362702 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-362702 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.545991ms)

-- stdout --
	{"specversion":"1.0","id":"8b6914a8-bf3d-4f3f-a2fc-3d2c10192fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-362702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2e6f6a9-18f4-48d2-9006-5faf04db4526","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"cf2a2bdb-4bb6-48b9-8438-99c7b11f21c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"439cbf56-8167-4b5d-9115-302806d1d2c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig"}}
	{"specversion":"1.0","id":"1f7282d5-7cc2-4287-bd6d-e739965518f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube"}}
	{"specversion":"1.0","id":"f6ad2f13-8159-46db-b48e-b9a662a7b320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"28d1c52d-be97-4dc6-b23e-b9709ea5467c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"111e170d-94a8-466b-b4ba-8757e8f84b3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-362702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-362702
--- PASS: TestErrorJSONOutput (0.23s)
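
Note: the error event in the stdout above carries a machine-readable name and exit code (DRV_UNSUPPORTED_OS, 56); a sketch of surfacing them, assuming jq is available and using an illustrative profile name:

	minikube start -p err-demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'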

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (85.5s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-631866 --driver=kvm2  --container-runtime=crio
E1120 21:15:43.397525    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-631866 --driver=kvm2  --container-runtime=crio: (39.820051203s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-634015 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-634015 --driver=kvm2  --container-runtime=crio: (43.033890572s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-631866
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-634015
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-634015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-634015
helpers_test.go:175: Cleaning up "first-631866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-631866
--- PASS: TestMinikubeProfile (85.50s)
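
For reference, the profile juggling above reduces to a handful of commands; names here are placeholders:

    $ minikube start -p first --driver=kvm2 --container-runtime=crio
    $ minikube start -p second --driver=kvm2 --container-runtime=crio
    $ minikube profile first         # switch the active profile back to 'first'
    $ minikube profile list -o json  # machine-readable view of both profiles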

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-780886 --memory=3072 --mount-string /tmp/TestMountStartserial1722899227/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-780886 --memory=3072 --mount-string /tmp/TestMountStartserial1722899227/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.823267205s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.82s)
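
The mount parameters in that start line map directly onto flags; a sketch with a placeholder host directory:

    $ minikube start -p mount-demo --no-kubernetes --driver=kvm2 --container-runtime=crio \
        --mount-string /srv/shared:/minikube-host \
        --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
    $ minikube -p mount-demo ssh -- findmnt --json /minikube-host   # verify, as the later steps do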

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-780886 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-780886 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-798707 --memory=3072 --mount-string /tmp/TestMountStartserial1722899227/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-798707 --memory=3072 --mount-string /tmp/TestMountStartserial1722899227/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.781808362s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.78s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798707 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798707 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-780886 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798707 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798707 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-798707
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-798707: (1.334617418s)
--- PASS: TestMountStart/serial/Stop (1.33s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-798707
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-798707: (17.221763532s)
--- PASS: TestMountStart/serial/RestartStopped (18.22s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798707 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-798707 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (134.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-213052 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1120 21:19:20.328703    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:19:36.327253    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-213052 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m14.235217239s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.60s)
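
The two-node bring-up is a single invocation; essentials only, with a placeholder profile:

    $ minikube start -p demo --nodes=2 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
    $ minikube -p demo status   # expect one Control Plane entry plus one Worker entry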

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-213052 -- rollout status deployment/busybox: (3.719820698s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-4xhbw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-h2kqf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-4xhbw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-h2kqf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-4xhbw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-h2kqf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.31s)
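
The deployment check is a standard apply / rollout / resolve loop; condensed, with a placeholder pod name:

    $ kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    $ kubectl rollout status deployment/busybox
    $ kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    $ kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local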

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-4xhbw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-4xhbw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-h2kqf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-213052 -- exec busybox-7b57f96db7-h2kqf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
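
The awk/cut pipeline above just plucks the resolved address out of busybox's nslookup output (fifth line, third space-separated field) so it can be pinged; roughly:

    $ kubectl exec <busybox-pod> -- sh -c \
        "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # prints the host-side gateway IP (192.168.39.1 in this run), the ping target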

                                                
                                    
TestMultiNode/serial/AddNode (45.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-213052 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-213052 -v=5 --alsologtostderr: (44.741704501s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.20s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-213052 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp testdata/cp-test.txt multinode-213052:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile993522990/001/cp-test_multinode-213052.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052:/home/docker/cp-test.txt multinode-213052-m02:/home/docker/cp-test_multinode-213052_multinode-213052-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m02 "sudo cat /home/docker/cp-test_multinode-213052_multinode-213052-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052:/home/docker/cp-test.txt multinode-213052-m03:/home/docker/cp-test_multinode-213052_multinode-213052-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m03 "sudo cat /home/docker/cp-test_multinode-213052_multinode-213052-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp testdata/cp-test.txt multinode-213052-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile993522990/001/cp-test_multinode-213052-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052-m02:/home/docker/cp-test.txt multinode-213052:/home/docker/cp-test_multinode-213052-m02_multinode-213052.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052 "sudo cat /home/docker/cp-test_multinode-213052-m02_multinode-213052.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052-m02:/home/docker/cp-test.txt multinode-213052-m03:/home/docker/cp-test_multinode-213052-m02_multinode-213052-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m03 "sudo cat /home/docker/cp-test_multinode-213052-m02_multinode-213052-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp testdata/cp-test.txt multinode-213052-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile993522990/001/cp-test_multinode-213052-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052-m03:/home/docker/cp-test.txt multinode-213052:/home/docker/cp-test_multinode-213052-m03_multinode-213052.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052 "sudo cat /home/docker/cp-test_multinode-213052-m03_multinode-213052.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 cp multinode-213052-m03:/home/docker/cp-test.txt multinode-213052-m02:/home/docker/cp-test_multinode-213052-m03_multinode-213052-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 ssh -n multinode-213052-m02 "sudo cat /home/docker/cp-test_multinode-213052-m03_multinode-213052-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.02s)
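
Every copy direction is verified with an ssh round-trip; the general pattern, with placeholder names:

    $ minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt          # host -> node
    $ minikube -p <profile> cp <node>:/home/docker/cp-test.txt /tmp/cp-test-local.txt        # node -> host
    $ minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/x.txt # node -> node
    $ minikube -p <profile> ssh -n <node-b> "sudo cat /home/docker/x.txt"                    # verify contents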

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-213052 node stop m03: (1.648066035s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-213052 status: exit status 7 (327.665578ms)

                                                
                                                
-- stdout --
	multinode-213052
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-213052-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-213052-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-213052 status --alsologtostderr: exit status 7 (338.47158ms)

                                                
                                                
-- stdout --
	multinode-213052
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-213052-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-213052-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:20:56.308947   34594 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:20:56.309072   34594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:20:56.309081   34594 out.go:374] Setting ErrFile to fd 2...
	I1120 21:20:56.309085   34594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:20:56.309285   34594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:20:56.309443   34594 out.go:368] Setting JSON to false
	I1120 21:20:56.309471   34594 mustload.go:66] Loading cluster: multinode-213052
	I1120 21:20:56.309599   34594 notify.go:221] Checking for updates...
	I1120 21:20:56.309835   34594 config.go:182] Loaded profile config "multinode-213052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:20:56.309860   34594 status.go:174] checking status of multinode-213052 ...
	I1120 21:20:56.311863   34594 status.go:371] multinode-213052 host status = "Running" (err=<nil>)
	I1120 21:20:56.311884   34594 host.go:66] Checking if "multinode-213052" exists ...
	I1120 21:20:56.314570   34594 main.go:143] libmachine: domain multinode-213052 has defined MAC address 52:54:00:27:db:3d in network mk-multinode-213052
	I1120 21:20:56.314998   34594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:db:3d", ip: ""} in network mk-multinode-213052: {Iface:virbr1 ExpiryTime:2025-11-20 22:17:57 +0000 UTC Type:0 Mac:52:54:00:27:db:3d Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-213052 Clientid:01:52:54:00:27:db:3d}
	I1120 21:20:56.315024   34594 main.go:143] libmachine: domain multinode-213052 has defined IP address 192.168.39.173 and MAC address 52:54:00:27:db:3d in network mk-multinode-213052
	I1120 21:20:56.315163   34594 host.go:66] Checking if "multinode-213052" exists ...
	I1120 21:20:56.315369   34594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:20:56.317287   34594 main.go:143] libmachine: domain multinode-213052 has defined MAC address 52:54:00:27:db:3d in network mk-multinode-213052
	I1120 21:20:56.317628   34594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:db:3d", ip: ""} in network mk-multinode-213052: {Iface:virbr1 ExpiryTime:2025-11-20 22:17:57 +0000 UTC Type:0 Mac:52:54:00:27:db:3d Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-213052 Clientid:01:52:54:00:27:db:3d}
	I1120 21:20:56.317649   34594 main.go:143] libmachine: domain multinode-213052 has defined IP address 192.168.39.173 and MAC address 52:54:00:27:db:3d in network mk-multinode-213052
	I1120 21:20:56.317809   34594 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/multinode-213052/id_rsa Username:docker}
	I1120 21:20:56.407955   34594 ssh_runner.go:195] Run: systemctl --version
	I1120 21:20:56.414913   34594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:20:56.437684   34594 kubeconfig.go:125] found "multinode-213052" server: "https://192.168.39.173:8443"
	I1120 21:20:56.437726   34594 api_server.go:166] Checking apiserver status ...
	I1120 21:20:56.437802   34594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:20:56.458945   34594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup
	W1120 21:20:56.471102   34594 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:20:56.471177   34594 ssh_runner.go:195] Run: ls
	I1120 21:20:56.476438   34594 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I1120 21:20:56.481968   34594 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I1120 21:20:56.481993   34594 status.go:463] multinode-213052 apiserver status = Running (err=<nil>)
	I1120 21:20:56.482005   34594 status.go:176] multinode-213052 status: &{Name:multinode-213052 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:20:56.482030   34594 status.go:174] checking status of multinode-213052-m02 ...
	I1120 21:20:56.483462   34594 status.go:371] multinode-213052-m02 host status = "Running" (err=<nil>)
	I1120 21:20:56.483481   34594 host.go:66] Checking if "multinode-213052-m02" exists ...
	I1120 21:20:56.485927   34594 main.go:143] libmachine: domain multinode-213052-m02 has defined MAC address 52:54:00:b7:9c:ff in network mk-multinode-213052
	I1120 21:20:56.486284   34594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:9c:ff", ip: ""} in network mk-multinode-213052: {Iface:virbr1 ExpiryTime:2025-11-20 22:19:26 +0000 UTC Type:0 Mac:52:54:00:b7:9c:ff Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-213052-m02 Clientid:01:52:54:00:b7:9c:ff}
	I1120 21:20:56.486310   34594 main.go:143] libmachine: domain multinode-213052-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:b7:9c:ff in network mk-multinode-213052
	I1120 21:20:56.486427   34594 host.go:66] Checking if "multinode-213052-m02" exists ...
	I1120 21:20:56.486622   34594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:20:56.488294   34594 main.go:143] libmachine: domain multinode-213052-m02 has defined MAC address 52:54:00:b7:9c:ff in network mk-multinode-213052
	I1120 21:20:56.488608   34594 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:9c:ff", ip: ""} in network mk-multinode-213052: {Iface:virbr1 ExpiryTime:2025-11-20 22:19:26 +0000 UTC Type:0 Mac:52:54:00:b7:9c:ff Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-213052-m02 Clientid:01:52:54:00:b7:9c:ff}
	I1120 21:20:56.488632   34594 main.go:143] libmachine: domain multinode-213052-m02 has defined IP address 192.168.39.178 and MAC address 52:54:00:b7:9c:ff in network mk-multinode-213052
	I1120 21:20:56.488757   34594 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21923-3793/.minikube/machines/multinode-213052-m02/id_rsa Username:docker}
	I1120 21:20:56.573147   34594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:20:56.590193   34594 status.go:176] multinode-213052-m02 status: &{Name:multinode-213052-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:20:56.590237   34594 status.go:174] checking status of multinode-213052-m03 ...
	I1120 21:20:56.591868   34594 status.go:371] multinode-213052-m03 host status = "Stopped" (err=<nil>)
	I1120 21:20:56.591888   34594 status.go:384] host is not running, skipping remaining checks
	I1120 21:20:56.591896   34594 status.go:176] multinode-213052-m03 status: &{Name:multinode-213052-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
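
Exit status 7 is the expected outcome here, not a failure: minikube status encodes stopped components in its exit code, so a profile with a stopped node reports non-zero even though the command itself succeeds. A quick check, assuming a multinode profile named demo:

    $ minikube -p demo node stop m03
    $ minikube -p demo status; echo "exit=$?"   # m03 shows host: Stopped and exit=7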

                                                
                                    
TestMultiNode/serial/StartAfterStop (43.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-213052 node start m03 -v=5 --alsologtostderr: (43.469601392s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (43.98s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (301.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-213052
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-213052
E1120 21:24:20.331357    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-213052: (2m44.407212411s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-213052 --wait=true -v=5 --alsologtostderr
E1120 21:24:36.327222    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-213052 --wait=true -v=5 --alsologtostderr: (2m16.643829608s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-213052
--- PASS: TestMultiNode/serial/RestartKeepsNodes (301.17s)
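
The assertion is simply that a full stop/start cycle preserves the node list; as a sketch:

    $ minikube node list -p demo > /tmp/nodes-before.txt
    $ minikube stop -p demo
    $ minikube start -p demo --wait=true
    $ minikube node list -p demo | diff /tmp/nodes-before.txt -   # empty diff: all nodes kept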

                                                
                                    
TestMultiNode/serial/DeleteNode (2.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-213052 node delete m03: (2.220191308s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.69s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (156.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 stop
E1120 21:29:19.400196    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:29:20.328489    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-213052 stop: (2m35.967903773s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-213052 status: exit status 7 (62.94895ms)

                                                
                                                
-- stdout --
	multinode-213052
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-213052-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-213052 status --alsologtostderr: exit status 7 (62.418213ms)

                                                
                                                
-- stdout --
	multinode-213052
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-213052-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:29:20.521722   36920 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:29:20.521864   36920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:29:20.521878   36920 out.go:374] Setting ErrFile to fd 2...
	I1120 21:29:20.521885   36920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:29:20.522111   36920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:29:20.522309   36920 out.go:368] Setting JSON to false
	I1120 21:29:20.522343   36920 mustload.go:66] Loading cluster: multinode-213052
	I1120 21:29:20.522370   36920 notify.go:221] Checking for updates...
	I1120 21:29:20.522777   36920 config.go:182] Loaded profile config "multinode-213052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:29:20.522797   36920 status.go:174] checking status of multinode-213052 ...
	I1120 21:29:20.525038   36920 status.go:371] multinode-213052 host status = "Stopped" (err=<nil>)
	I1120 21:29:20.525056   36920 status.go:384] host is not running, skipping remaining checks
	I1120 21:29:20.525062   36920 status.go:176] multinode-213052 status: &{Name:multinode-213052 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:29:20.525086   36920 status.go:174] checking status of multinode-213052-m02 ...
	I1120 21:29:20.526421   36920 status.go:371] multinode-213052-m02 host status = "Stopped" (err=<nil>)
	I1120 21:29:20.526434   36920 status.go:384] host is not running, skipping remaining checks
	I1120 21:29:20.526439   36920 status.go:176] multinode-213052-m02 status: &{Name:multinode-213052-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (156.09s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (126.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-213052 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1120 21:29:36.329085    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-213052 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m6.481334335s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-213052 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (126.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-213052
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-213052-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-213052-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (79.479512ms)

                                                
                                                
-- stdout --
	* [multinode-213052-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-213052-m02' is duplicated with machine name 'multinode-213052-m02' in profile 'multinode-213052'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-213052-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-213052-m03 --driver=kvm2  --container-runtime=crio: (42.399792665s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-213052
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-213052: exit status 80 (205.83241ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-213052 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-213052-m03 already exists in multinode-213052-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-213052-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.57s)
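
Both refusals come from the <cluster>-m0N machine-naming scheme: a new profile may not reuse an existing machine name (exit 14, MK_USAGE), and node add may not mint a node name already owned by another profile (exit 80, GUEST_NODE_ADD). An illustrative sketch with a placeholder profile:

    $ minikube start -p demo --nodes=2 --driver=kvm2   # creates machines 'demo' and 'demo-m02'
    $ minikube start -p demo-m02 --driver=kvm2         # exit 14: duplicates machine name 'demo-m02'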

                                                
                                    
TestScheduledStopUnix (111.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-802518 --memory=3072 --driver=kvm2  --container-runtime=crio
E1120 21:34:36.329045    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-802518 --memory=3072 --driver=kvm2  --container-runtime=crio: (40.290775034s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-802518 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 21:35:05.440088   39338 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:35:05.440236   39338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:35:05.440246   39338 out.go:374] Setting ErrFile to fd 2...
	I1120 21:35:05.440250   39338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:35:05.440418   39338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:35:05.440658   39338 out.go:368] Setting JSON to false
	I1120 21:35:05.440741   39338 mustload.go:66] Loading cluster: scheduled-stop-802518
	I1120 21:35:05.441152   39338 config.go:182] Loaded profile config "scheduled-stop-802518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:35:05.441225   39338 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/config.json ...
	I1120 21:35:05.441393   39338 mustload.go:66] Loading cluster: scheduled-stop-802518
	I1120 21:35:05.441489   39338 config.go:182] Loaded profile config "scheduled-stop-802518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-802518 -n scheduled-stop-802518
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-802518 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 21:35:05.729601   39383 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:35:05.729730   39383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:35:05.729744   39383 out.go:374] Setting ErrFile to fd 2...
	I1120 21:35:05.729749   39383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:35:05.729980   39383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:35:05.730210   39383 out.go:368] Setting JSON to false
	I1120 21:35:05.730716   39383 daemonize_unix.go:73] killing process 39372 as it is an old scheduled stop
	I1120 21:35:05.730825   39383 mustload.go:66] Loading cluster: scheduled-stop-802518
	I1120 21:35:05.731750   39383 config.go:182] Loaded profile config "scheduled-stop-802518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:35:05.731844   39383 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/config.json ...
	I1120 21:35:05.732070   39383 mustload.go:66] Loading cluster: scheduled-stop-802518
	I1120 21:35:05.732216   39383 config.go:182] Loaded profile config "scheduled-stop-802518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1120 21:35:05.737062    7706 retry.go:31] will retry after 112.651µs: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.738265    7706 retry.go:31] will retry after 185.905µs: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.739431    7706 retry.go:31] will retry after 170.267µs: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.740661    7706 retry.go:31] will retry after 411.435µs: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.741818    7706 retry.go:31] will retry after 535.104µs: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.743033    7706 retry.go:31] will retry after 443.752µs: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.744172    7706 retry.go:31] will retry after 1.634055ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.746375    7706 retry.go:31] will retry after 2.476158ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.749625    7706 retry.go:31] will retry after 2.423984ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.752833    7706 retry.go:31] will retry after 4.331711ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.758081    7706 retry.go:31] will retry after 8.236337ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.767322    7706 retry.go:31] will retry after 12.761696ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.780597    7706 retry.go:31] will retry after 11.830588ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.792836    7706 retry.go:31] will retry after 19.696447ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
I1120 21:35:05.813099    7706 retry.go:31] will retry after 28.586819ms: open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-802518 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-802518 -n scheduled-stop-802518
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-802518
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-802518 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 21:35:31.411565   39532 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:35:31.411688   39532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:35:31.411700   39532 out.go:374] Setting ErrFile to fd 2...
	I1120 21:35:31.411708   39532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:35:31.411913   39532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:35:31.412159   39532 out.go:368] Setting JSON to false
	I1120 21:35:31.412239   39532 mustload.go:66] Loading cluster: scheduled-stop-802518
	I1120 21:35:31.412566   39532 config.go:182] Loaded profile config "scheduled-stop-802518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:35:31.412625   39532 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/scheduled-stop-802518/config.json ...
	I1120 21:35:31.412815   39532 mustload.go:66] Loading cluster: scheduled-stop-802518
	I1120 21:35:31.412929   39532 config.go:182] Loaded profile config "scheduled-stop-802518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-802518
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-802518: exit status 7 (57.934016ms)

                                                
                                                
-- stdout --
	scheduled-stop-802518
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-802518 -n scheduled-stop-802518
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-802518 -n scheduled-stop-802518: exit status 7 (58.774942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-802518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-802518
--- PASS: TestScheduledStopUnix (111.88s)
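
The scheduled-stop surface exercised above, in plain commands (profile name is a placeholder):

    $ minikube stop -p demo --schedule 5m        # arm a stop five minutes out
    $ minikube stop -p demo --schedule 15s       # re-arming kills the previous scheduler process
    $ minikube stop -p demo --cancel-scheduled   # '* All existing scheduled stops cancelled'
    # left to fire, a schedule drives status to host: Stopped with exit status 7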

                                                
                                    
TestRunningBinaryUpgrade (109.54s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4009730361 start -p running-upgrade-155504 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4009730361 start -p running-upgrade-155504 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (59.990586084s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-155504 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-155504 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.895332663s)
helpers_test.go:175: Cleaning up "running-upgrade-155504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-155504
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-155504: (1.089584247s)
--- PASS: TestRunningBinaryUpgrade (109.54s)

                                                
                                    
TestKubernetesUpgrade (187.85s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.658933237s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-021825
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-021825: (1.936222829s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-021825 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-021825 status --format={{.Host}}: exit status 7 (64.205549ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.015634412s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-021825 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (74.189197ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-021825] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-021825
	    minikube start -p kubernetes-upgrade-021825 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0218252 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-021825 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-021825 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.833398671s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-021825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-021825
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-021825: (1.205950313s)
--- PASS: TestKubernetesUpgrade (187.85s)
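
The upgrade path is stop-then-start at a newer --kubernetes-version; downgrades are refused up front (exit 106) with the recovery options quoted above. Condensed, with a placeholder profile:

    $ minikube start -p demo --kubernetes-version=v1.28.0
    $ minikube stop -p demo
    $ minikube start -p demo --kubernetes-version=v1.34.1   # in-place upgrade
    $ minikube start -p demo --kubernetes-version=v1.28.0   # exit 106: K8S_DOWNGRADE_UNSUPPORTED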

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-733370 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-733370 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (97.307189ms)

-- stdout --
	* [NoKubernetes-733370] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (108.11s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-733370 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-733370 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m47.792798467s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-733370 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (108.11s)

TestNoKubernetes/serial/StartWithStopK8s (19.2s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-733370 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-733370 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (18.055075297s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-733370 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-733370 status -o json: exit status 2 (231.73128ms)

-- stdout --
	{"Name":"NoKubernetes-733370","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
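Note: the status line above is machine-readable, which is what lets the test assert "host running, kubelet stopped" precisely. A minimal sketch of decoding it with the standard library; the struct mirrors only the fields visible in this one log line and may not be exhaustive:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeStatus mirrors the fields shown in the log line above.
type minikubeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-733370","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st minikubeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// The exit status 2 logged above is how the status command signals this
	// host-running/kubelet-stopped combination; it is not a test failure.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}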
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-733370
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.20s)

TestNoKubernetes/serial/Start (40.02s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-733370 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-733370 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.022592956s)
--- PASS: TestNoKubernetes/serial/Start (40.02s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21923-3793/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
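Note: the v0.0.0 directory above appears to act as the placeholder version when Kubernetes is disabled, so the check amounts to "nothing was downloaded under it". A minimal sketch of such a directory probe (path copied from the log; the real assertion lives in no_kubernetes_test.go):

package main

import (
	"fmt"
	"os"
)

func main() {
	dir := "/home/jenkins/minikube-integration/21923-3793/.minikube/cache/linux/amd64/v0.0.0"
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Println("ok: no Kubernetes downloads (cache directory absent)")
		return
	}
	if err != nil {
		panic(err)
	}
	// Any kubelet/kubeadm/kubectl entry here would mean binaries were
	// fetched despite --no-kubernetes.
	fmt.Printf("entries under v0.0.0: %d\n", len(entries))
}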

TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-733370 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-733370 "sudo systemctl is-active --quiet service kubelet": exit status 1 (177.146845ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
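Note: this test passes because the command fails: "systemctl is-active --quiet" exits non-zero when the unit is inactive, so exit status 1 over ssh is exactly what "kubelet is not running" looks like. A minimal sketch of that inverted assertion (hypothetical helper, run against the local systemd rather than the minikube guest):

package main

import (
	"fmt"
	"os/exec"
)

// expectInactive treats a non-zero exit from `systemctl is-active --quiet`
// as success, mirroring the inverted check above.
func expectInactive(unit string) bool {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	return err != nil // non-zero exit => unit not active
}

func main() {
	fmt.Println("kubelet inactive:", expectInactive("kubelet"))
}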

TestNoKubernetes/serial/ProfileList (1.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.11s)

TestNoKubernetes/serial/Stop (1.47s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-733370
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-733370: (1.465168402s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

TestNoKubernetes/serial/StartNoArgs (46.63s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-733370 --driver=kvm2  --container-runtime=crio
E1120 21:39:20.327905    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:39:36.328062    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-733370 --driver=kvm2  --container-runtime=crio: (46.632794389s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.63s)

TestStoppedBinaryUpgrade/Setup (0.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-733370 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-733370 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.678361ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestStoppedBinaryUpgrade/Upgrade (114.58s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3424930712 start -p stopped-upgrade-744498 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3424930712 start -p stopped-upgrade-744498 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m1.280923541s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3424930712 -p stopped-upgrade-744498 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3424930712 -p stopped-upgrade-744498 stop: (2.04804366s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-744498 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-744498 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.246973879s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (114.58s)
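Note: the upgrade flow validated here is: provision with the previously released binary, stop the cluster, then start the same profile with the binary under test. A compressed sketch of that three-step sequence (binary paths are the logged ones; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one step of the stopped-binary upgrade flow, aborting on error.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", bin, args, err, out))
	}
}

func main() {
	old := "/tmp/minikube-v1.32.0.3424930712" // released v1.32.0 binary, as logged
	cur := "out/minikube-linux-amd64"         // binary under test
	run(old, "start", "-p", "stopped-upgrade-744498", "--memory=3072", "--vm-driver=kvm2", "--container-runtime=crio")
	run(old, "-p", "stopped-upgrade-744498", "stop")
	run(cur, "start", "-p", "stopped-upgrade-744498", "--memory=3072", "--driver=kvm2", "--container-runtime=crio")
}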

TestNetworkPlugins/group/false (3.54s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-507207 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-507207 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.839133ms)

-- stdout --
	* [false-507207] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1120 21:39:58.223746   43489 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:39:58.224020   43489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:39:58.224029   43489 out.go:374] Setting ErrFile to fd 2...
	I1120 21:39:58.224033   43489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:39:58.224258   43489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-3793/.minikube/bin
	I1120 21:39:58.224711   43489 out.go:368] Setting JSON to false
	I1120 21:39:58.225549   43489 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4948,"bootTime":1763669850,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:39:58.225637   43489 start.go:143] virtualization: kvm guest
	I1120 21:39:58.227429   43489 out.go:179] * [false-507207] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:39:58.228742   43489 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:39:58.228729   43489 notify.go:221] Checking for updates...
	I1120 21:39:58.231180   43489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:39:58.232478   43489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-3793/kubeconfig
	I1120 21:39:58.233648   43489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-3793/.minikube
	I1120 21:39:58.234777   43489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:39:58.235966   43489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:39:58.237527   43489 config.go:182] Loaded profile config "cert-expiration-925075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:39:58.237622   43489 config.go:182] Loaded profile config "kubernetes-upgrade-021825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:39:58.237702   43489 config.go:182] Loaded profile config "stopped-upgrade-744498": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1120 21:39:58.237786   43489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:39:58.274839   43489 out.go:179] * Using the kvm2 driver based on user configuration
	I1120 21:39:58.276319   43489 start.go:309] selected driver: kvm2
	I1120 21:39:58.276336   43489 start.go:930] validating driver "kvm2" against <nil>
	I1120 21:39:58.276348   43489 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:39:58.278534   43489 out.go:203] 
	W1120 21:39:58.279789   43489 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1120 21:39:58.280788   43489 out.go:203] 

** /stderr **
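Note: exit 14 (MK_USAGE) is produced by a flag-validation step, not by the cluster: crio ships no built-in pod networking, so --cni=false is rejected up front and the profile is never created, which is why every probe below fails with "context was not found". An illustrative sketch of such a guard (not minikube's actual validation code):

package main

import (
	"errors"
	"fmt"
)

// validateCNI mirrors the MK_USAGE failure above: the crio runtime cannot
// run pods unless some CNI plugin provides pod networking.
func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return errors.New(`The "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false")) // the case exercised by this test
}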
net_test.go:88: 
----------------------- debugLogs start: false-507207 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-507207

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-507207

>>> host: /etc/nsswitch.conf:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /etc/hosts:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /etc/resolv.conf:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-507207

>>> host: crictl pods:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: crictl containers:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> k8s: describe netcat deployment:
error: context "false-507207" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-507207" does not exist

>>> k8s: netcat logs:
error: context "false-507207" does not exist

>>> k8s: describe coredns deployment:
error: context "false-507207" does not exist

>>> k8s: describe coredns pods:
error: context "false-507207" does not exist

>>> k8s: coredns logs:
error: context "false-507207" does not exist

>>> k8s: describe api server pod(s):
error: context "false-507207" does not exist

>>> k8s: api server logs:
error: context "false-507207" does not exist

>>> host: /etc/cni:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: ip a s:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: ip r s:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: iptables-save:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: iptables table nat:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> k8s: describe kube-proxy daemon set:
error: context "false-507207" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-507207" does not exist

>>> k8s: kube-proxy logs:
error: context "false-507207" does not exist

>>> host: kubelet daemon status:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: kubelet daemon config:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> k8s: kubelet logs:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:36:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.14:8443
  name: cert-expiration-925075
contexts:
- context:
    cluster: cert-expiration-925075
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:36:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-925075
  name: cert-expiration-925075
current-context: ""
kind: Config
users:
- name: cert-expiration-925075
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/cert-expiration-925075/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/cert-expiration-925075/client.key
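Note: every kubectl probe in this dump fails identically because the config above defines only a cert-expiration-925075 context and current-context is empty. A simplified stdlib sketch of the lookup kubectl effectively performs (names taken from the dump):

package main

import "fmt"

func lookupContext(known map[string]struct{}, name string) error {
	if _, ok := known[name]; !ok {
		return fmt.Errorf("context was not found for specified context: %s", name)
	}
	return nil
}

func main() {
	known := map[string]struct{}{"cert-expiration-925075": {}}
	fmt.Println(lookupContext(known, "false-507207")) // matches the errors in this dump
}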

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-507207

>>> host: docker daemon status:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: docker daemon config:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /etc/docker/daemon.json:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: docker system info:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: cri-docker daemon status:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: cri-docker daemon config:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: cri-dockerd version:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: containerd daemon status:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: containerd daemon config:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /etc/containerd/config.toml:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: containerd config dump:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: crio daemon status:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: crio daemon config:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: /etc/crio:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

>>> host: crio config:
* Profile "false-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507207"

----------------------- debugLogs end: false-507207 [took: 3.246092107s] --------------------------------
helpers_test.go:175: Cleaning up "false-507207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-507207
--- PASS: TestNetworkPlugins/group/false (3.54s)

TestISOImage/Setup (36.29s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-304958 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-304958 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.289399472s)
--- PASS: TestISOImage/Setup (36.29s)

TestPause/serial/Start (105.78s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-763370 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-763370 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.781753442s)
--- PASS: TestPause/serial/Start (105.78s)

TestNetworkPlugins/group/auto/Start (114.02s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m54.020523228s)
--- PASS: TestNetworkPlugins/group/auto/Start (114.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-744498
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-744498: (1.070030525s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

TestNetworkPlugins/group/kindnet/Start (103.87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m43.870122647s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (103.87s)

TestISOImage/Binaries/crictl (0.18s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)

TestISOImage/Binaries/curl (0.2s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which curl"
E1120 21:50:51.852024    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:51.858527    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/curl (0.20s)

TestISOImage/Binaries/docker (0.21s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.21s)

TestISOImage/Binaries/git (0.2s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

TestISOImage/Binaries/iptables (0.21s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.21s)

TestISOImage/Binaries/podman (0.2s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

TestISOImage/Binaries/rsync (0.21s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.21s)

TestISOImage/Binaries/socat (0.17s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

TestISOImage/Binaries/wget (0.2s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.20s)

TestISOImage/Binaries/VBoxControl (0.22s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.22s)

TestISOImage/Binaries/VBoxService (0.21s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.21s)

TestNetworkPlugins/group/calico/Start (131.24s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m11.243777308s)
--- PASS: TestNetworkPlugins/group/calico/Start (131.24s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2kr6k" [3f4484e8-24b5-48c1-804d-10454e598ca3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00521939s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-507207 "pgrep -a kubelet"
I1120 21:43:34.523931    7706 config.go:182] Loaded profile config "auto-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-507207 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jljcm" [8dd63653-fff0-4db6-970c-f38472f1759e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jljcm" [8dd63653-fff0-4db6-970c-f38472f1759e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00555257s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-507207 "pgrep -a kubelet"
I1120 21:43:40.500694    7706 config.go:182] Loaded profile config "kindnet-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-507207 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2f8mj" [7cbddb58-b4a2-44e7-866b-b9b230f430f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2f8mj" [7cbddb58-b4a2-44e7-866b-b9b230f430f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007188306s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-507207 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
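Note: the Localhost and HairPin checks above are plain TCP reachability probes (nc -w 5 -z); the hairpin case dials the service's own name from inside the pod backing it. A rough Go equivalent of the probe (illustrative; the suite itself shells out to nc inside the netcat pod):

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable mimics `nc -w 5 -z host port`: connect, then close without
// sending data; success means something accepted the connection.
func reachable(host, port string) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(reachable("localhost", "8080")) // the Localhost check
	fmt.Println(reachable("netcat", "8080"))    // the HairPin check (name resolves only in-cluster)
}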

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-507207 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (73.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m13.282845204s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.28s)

TestNetworkPlugins/group/enable-default-cni/Start (109.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m49.370303722s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (109.37s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-v644b" [48867d19-565f-4b5a-85d0-2713453afce5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-v644b" [48867d19-565f-4b5a-85d0-2713453afce5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005598917s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/Start (107.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m47.941893246s)
--- PASS: TestNetworkPlugins/group/flannel/Start (107.94s)

TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-507207 "pgrep -a kubelet"
I1120 21:44:08.832278    7706 config.go:182] Loaded profile config "calico-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)
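
Note: KubeletFlags inspects the kubelet command line on the node itself rather than anything served by the API. A stock minikube can run the same probe (the harness uses its locally built out/minikube-linux-amd64, so the stock binary is an assumption):

	minikube ssh -p calico-507207 "pgrep -a kubelet"

pgrep -a prints the full command line, making flags such as --container-runtime-endpoint visible.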

TestNetworkPlugins/group/calico/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-507207 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-54lwg" [637a20f0-b639-4d20-a42a-2d3976a9aba6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-54lwg" [637a20f0-b639-4d20-a42a-2d3976a9aba6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003872993s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)
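
Note: kubectl replace --force deletes and recreates the deployment, guaranteeing fresh pods even if a netcat deployment is left over from an earlier plugin profile. A sketch that swaps the harness's pod polling for an explicit rollout wait:

	kubectl --context calico-507207 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context calico-507207 rollout status deployment/netcat --timeout=15m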

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-507207 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)
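
Note: nslookup kubernetes.default leans on the pod's resolv.conf search path to expand the name. To take the search path out of the equation, query the fully qualified name instead (this assumes the default cluster.local cluster domain):

	kubectl --context calico-507207 exec deployment/netcat -- \
	  nslookup kubernetes.default.svc.cluster.local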

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1120 21:44:20.327976    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (113.62s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-507207 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m53.617262132s)
--- PASS: TestNetworkPlugins/group/bridge/Start (113.62s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-507207 "pgrep -a kubelet"
I1120 21:45:11.038645    7706 config.go:182] Loaded profile config "custom-flannel-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.40s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-507207 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jhfnx" [cb14ce1d-6ef4-421c-971a-70662ea34d1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jhfnx" [cb14ce1d-6ef4-421c-971a-70662ea34d1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004463847s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-507207 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (102.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-728530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-728530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m42.333188587s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (102.33s)
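
Note: this group pins --kubernetes-version=v1.28.0 to exercise an older control plane than the v1.34.1 used by the network-plugin profiles above. The start is reproducible outside the harness with the same flags, again assuming a stock minikube binary:

	minikube start -p old-k8s-version-728530 --memory=3072 \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.0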

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-507207 "pgrep -a kubelet"
I1120 21:45:51.552005    7706 config.go:182] Loaded profile config "enable-default-cni-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-507207 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z287k" [89ce7da7-3cb8-4b0d-870c-1fe7fe6f76a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z287k" [89ce7da7-3cb8-4b0d-870c-1fe7fe6f76a9] Running
E1120 21:45:59.402185    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004425299s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-kmq7h" [f544274a-52d7-4bd7-ae87-1e39d93cbd9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005176393s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-507207 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-507207 "pgrep -a kubelet"
I1120 21:46:02.240353    7706 config.go:182] Loaded profile config "flannel-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

TestNetworkPlugins/group/flannel/NetCatPod (10.46s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-507207 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r25ct" [4138de32-6532-416c-8c95-6e58006e739b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r25ct" [4138de32-6532-416c-8c95-6e58006e739b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.142172968s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.46s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-507207 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestStartStop/group/no-preload/serial/FirstStart (114.55s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-678625 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-678625 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m54.553018773s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.55s)
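
Note: --preload=false skips the preloaded image tarball, so every component image is pulled individually; that overhead is consistent with this being the slowest FirstStart in the run (114.55s). The same start by hand, stock binary assumed:

	minikube start -p no-preload-678625 --memory=3072 --preload=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.1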

TestStartStop/group/embed-certs/serial/FirstStart (97.51s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-984469 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-984469 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m37.508564996s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.51s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-507207 "pgrep -a kubelet"
I1120 21:46:30.768074    7706 config.go:182] Loaded profile config "bridge-507207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-507207 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-26l9x" [7d89a280-6206-48a7-8d79-fc061625f3c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-26l9x" [7d89a280-6206-48a7-8d79-fc061625f3c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004490825s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-507207 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-507207 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-993261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-993261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m33.654594838s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.65s)
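
Note: default-k8s-diff-port moves the API server off minikube's usual 8443 via --apiserver-port=8444; the generated kubeconfig context picks the port up automatically, which a quick cluster-info call confirms. Sketch, stock binaries assumed:

	minikube start -p default-k8s-diff-port-993261 --memory=3072 \
	  --apiserver-port=8444 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.34.1
	kubectl --context default-k8s-diff-port-993261 cluster-info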

TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-728530 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a5217c60-d49c-41fb-b18d-31d7bbc57c33] Pending
helpers_test.go:352: "busybox" [a5217c60-d49c-41fb-b18d-31d7bbc57c33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a5217c60-d49c-41fb-b18d-31d7bbc57c33] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005726248s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-728530 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)
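
Note: DeployApp is a smoke test that a plain pod schedules and accepts exec; the trailing ulimit -n call verifies the runtime propagates a sane open-files limit into the container. The same steps by hand, assuming the busybox manifest from the test tree:

	kubectl --context old-k8s-version-728530 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-728530 wait pod/busybox --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-728530 exec busybox -- /bin/sh -c "ulimit -n"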

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-728530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-728530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.353318409s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-728530 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.44s)
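
Note: the --images/--registries pair points the metrics-server addon at a deliberately unresolvable registry (fake.domain) with an echoserver stand-in, so the test only verifies the overrides land in the Deployment spec, not that metrics-server actually serves metrics. The same override by hand:

	minikube -p old-k8s-version-728530 addons enable metrics-server \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	kubectl --context old-k8s-version-728530 -n kube-system describe deploy/metrics-server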

TestStartStop/group/old-k8s-version/serial/Stop (82.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-728530 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-728530 --alsologtostderr -v=3: (1m22.960910307s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (82.96s)

TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-984469 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7bbafb19-50f6-4cbc-a4cf-e1f09f88f29b] Pending
helpers_test.go:352: "busybox" [7bbafb19-50f6-4cbc-a4cf-e1f09f88f29b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7bbafb19-50f6-4cbc-a4cf-e1f09f88f29b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.007322303s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-984469 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

TestStartStop/group/no-preload/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-678625 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b55516fa-7906-4efd-a18e-7186004195c1] Pending
helpers_test.go:352: "busybox" [b55516fa-7906-4efd-a18e-7186004195c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b55516fa-7906-4efd-a18e-7186004195c1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005764859s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-678625 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-984469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-984469 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (85.47s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-984469 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-984469 --alsologtostderr -v=3: (1m25.472154346s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.47s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-678625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-678625 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/no-preload/serial/Stop (90.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-678625 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-678625 --alsologtostderr -v=3: (1m30.13030188s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.13s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993261 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [64c2f709-6b65-42b0-9962-c7312aae52f8] Pending
helpers_test.go:352: "busybox" [64c2f709-6b65-42b0-9962-c7312aae52f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1120 21:48:34.299779    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.306185    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.317553    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.339025    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.380513    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.461949    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.623615    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.805301    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.811690    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.823211    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.844693    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.886249    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.945776    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:34.967689    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:35.129417    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [64c2f709-6b65-42b0-9962-c7312aae52f8] Running
E1120 21:48:35.451189    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:35.587835    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:36.092906    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:36.869187    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:37.375041    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:39.430672    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:39.936630    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003992541s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993261 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-993261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-993261 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (71.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-993261 --alsologtostderr -v=3
E1120 21:48:44.552561    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:45.058600    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:54.793968    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:48:55.300116    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-993261 --alsologtostderr -v=3: (1m11.806274805s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (71.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-728530 -n old-k8s-version-728530
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-728530 -n old-k8s-version-728530: exit status 7 (59.59281ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-728530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
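
Note: minikube status exits non-zero whenever the host is not Running, so exit status 7 against a stopped profile is the expected outcome here (the harness logs it as "may be ok"). The check plus the addon enable, by hand:

	minikube status -p old-k8s-version-728530 --format='{{.Host}}'   # prints Stopped, exits 7
	minikube addons enable dashboard -p old-k8s-version-728530 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4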

TestStartStop/group/old-k8s-version/serial/SecondStart (45.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-728530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1120 21:49:02.633717    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:02.640204    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:02.651660    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:02.673057    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:02.714560    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:02.796132    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:02.957939    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:03.279713    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:03.401371    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:03.921251    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:05.203391    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:07.765390    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:12.887423    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:15.276095    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:15.781479    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:20.328324    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/functional-933412/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:23.128970    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:49:36.327598    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/addons-947553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-728530 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (45.496788113s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-728530 -n old-k8s-version-728530
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.83s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-984469 -n embed-certs-984469
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-984469 -n embed-certs-984469: exit status 7 (70.627209ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-984469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-49ngd" [9845df14-c0e6-4119-8e42-1106ae845829] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-49ngd" [9845df14-c0e6-4119-8e42-1106ae845829] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.003992863s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/embed-certs/serial/SecondStart (50.37s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-984469 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1120 21:49:43.611009    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-984469 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (50.114023448s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-984469 -n embed-certs-984469
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.37s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-678625 -n no-preload-678625
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-678625 -n no-preload-678625: exit status 7 (79.667271ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-678625 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (70.88s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-678625 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-678625 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m10.501013992s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-678625 -n no-preload-678625
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (70.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261: exit status 7 (69.055241ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-993261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (75.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-993261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1120 21:49:56.238371    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/kindnet-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-993261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.291636521s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (75.61s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-49ngd" [9845df14-c0e6-4119-8e42-1106ae845829] Running
E1120 21:49:56.742914    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/auto-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005161909s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-728530 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-728530 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (3.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-728530 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-728530 -n old-k8s-version-728530
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-728530 -n old-k8s-version-728530: exit status 2 (277.115548ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-728530 -n old-k8s-version-728530
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-728530 -n old-k8s-version-728530: exit status 2 (304.937883ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-728530 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-728530 --alsologtostderr -v=1: (1.053508859s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-728530 -n old-k8s-version-728530
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-728530 -n old-k8s-version-728530
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.37s)
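The Pause subtest above runs a full pause / status / unpause cycle: while paused, status renders {{.APIServer}} as "Paused" and {{.Kubelet}} as "Stopped", exiting 2 in both cases. A hedged sketch for reproducing the cycle by hand, using only commands that appear in the log (profile name taken from this run; --format accepts a Go template over minikube's status fields):

	$ out/minikube-linux-amd64 pause -p old-k8s-version-728530
	$ out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-728530   # prints "Paused", exit status 2
	$ out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-728530     # prints "Stopped", exit status 2
	$ out/minikube-linux-amd64 unpause -p old-k8s-version-728530
	$ out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-728530   # exit status 0 once unpaused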
TestStartStop/group/newest-cni/serial/FirstStart (85.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202395 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1120 21:50:11.418645    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:11.425139    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:11.436577    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:11.458049    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:11.499565    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:11.581171    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:11.742816    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:12.065142    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:12.706696    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:13.988063    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:16.549520    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:21.671638    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:24.572466    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:31.913531    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202395 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.122953863s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (85.12s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mqlpm" [422e077a-d664-4dba-bcb7-3f5fae9bcfc2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005045898s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mqlpm" [422e077a-d664-4dba-bcb7-3f5fae9bcfc2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007526492s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-984469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-984469 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/embed-certs/serial/Pause (3.55s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-984469 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-984469 --alsologtostderr -v=1: (1.346102328s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-984469 -n embed-certs-984469
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-984469 -n embed-certs-984469: exit status 2 (274.27685ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-984469 -n embed-certs-984469
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-984469 -n embed-certs-984469: exit status 2 (273.478447ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-984469 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-984469 -n embed-certs-984469
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-984469 -n embed-certs-984469
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.55s)

TestISOImage/PersistentMounts//data (0.23s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "df -t ext4 /data | grep /data"
E1120 21:50:51.870593    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:51.892691    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:51.934315    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:52.016146    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//data (0.23s)
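Each PersistentMounts subtest asserts that the path is backed by a persistent ext4 filesystem rather than tmpfs, via the df/grep one-liner at iso_test.go:97. A hedged sketch for spot-checking the whole set in one pass (the loop is illustrative; the mount list mirrors the subtests that follow, and guest-304958 is this run's profile):

	$ for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
	>   out/minikube-linux-amd64 -p guest-304958 ssh "df -t ext4 $d | grep $d" || echo "$d: not ext4-backed"
	> done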
TestISOImage/PersistentMounts//var/lib/docker (0.22s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.22s)

TestISOImage/PersistentMounts//var/lib/cni (0.22s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.22s)

TestISOImage/PersistentMounts//var/lib/kubelet (0.22s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
E1120 21:50:53.143082    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.22s)

TestISOImage/PersistentMounts//var/lib/minikube (0.22s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
E1120 21:50:52.179332    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.22s)

TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

TestISOImage/PersistentMounts//var/lib/boot2docker (0.23s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
E1120 21:50:52.395845    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:52.501580    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.23s)

TestISOImage/VersionJSON (0.21s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
--- PASS: TestISOImage/VersionJSON (0.21s)
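TestISOImage/VersionJSON reads /version.json from inside the guest and checks the fields printed above. A hedged host-side equivalent (assumes jq is installed on the host; field names are exactly those in the output above):

	$ out/minikube-linux-amd64 -p guest-304958 ssh "cat /version.json" | jq -r '.iso_version, .minikube_version, .commit'
	v1.37.0-1763503576-21924
	v1.37.0
	fae26615d717024600f131fc4fa68f9450a9ef29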
TestISOImage/eBPFSupport (0.21s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-304958 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.21s)
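The eBPF check only confirms that BTF type information is exposed at /sys/kernel/btf/vmlinux. A hedged follow-up for inspecting the BTF blob itself (assumes bpftool is present in the guest, which this ISO may or may not ship):

	$ out/minikube-linux-amd64 -p guest-304958 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
	$ out/minikube-linux-amd64 -p guest-304958 ssh "bpftool btf dump file /sys/kernel/btf/vmlinux | head -n 3"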
E1120 21:50:56.043289    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:56.049687    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:56.061192    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:56.082645    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:56.124076    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:56.205518    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:56.367296    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:56.688928    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:56.987394    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:57.331309    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:50:58.612612    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:01.174360    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:02.109119    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vv8q7" [e7d0b70a-443f-4ea6-bc04-f7e96e2b5d54] Running
E1120 21:51:06.295783    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004919276s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vv8q7" [e7d0b70a-443f-4ea6-bc04-f7e96e2b5d54] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005295875s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-678625 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hcp2h" [34fb9410-de75-4acf-b4cb-c0cfec7fc860] Running
E1120 21:51:12.351284    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004569549s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-678625 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.62s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-678625 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-678625 -n no-preload-678625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-678625 -n no-preload-678625: exit status 2 (231.991042ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-678625 -n no-preload-678625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-678625 -n no-preload-678625: exit status 2 (224.759608ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-678625 --alsologtostderr -v=1
E1120 21:51:16.537761    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-678625 -n no-preload-678625
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-678625 -n no-preload-678625
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.62s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hcp2h" [34fb9410-de75-4acf-b4cb-c0cfec7fc860] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004402578s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-993261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993261 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-993261 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261: exit status 2 (229.788901ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261: exit status 2 (235.189192ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-993261 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-993261 -n default-k8s-diff-port-993261
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.74s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202395 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1120 21:51:32.325523    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/bridge-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:32.832712    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202395 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.133391368s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)
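The enable call above swaps the metrics-server addon's image for registry.k8s.io/echoserver:1.4 through the --images/--registries overrides. A hedged way to confirm the override landed (assumes the addon keeps its stock deployment name, metrics-server in kube-system):

	$ kubectl --context newest-cni-202395 -n kube-system get deploy metrics-server \
	    -o jsonpath='{.spec.template.spec.containers[0].image}'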
TestStartStop/group/newest-cni/serial/Stop (10.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-202395 --alsologtostderr -v=3
E1120 21:51:33.357445    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/custom-flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:33.607216    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/bridge-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:36.168785    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/bridge-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:37.019207    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:41.290369    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/bridge-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-202395 --alsologtostderr -v=3: (10.572008446s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.57s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202395 -n newest-cni-202395
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202395 -n newest-cni-202395: exit status 7 (61.969659ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-202395 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)
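Enabling the dashboard addon succeeds here even though the cluster is stopped (exit status 7 above), since the enable only records the addon in the profile's config; the manifests are applied on the next start. A hedged check that the setting stuck, using the plain addons table:

	$ out/minikube-linux-amd64 addons list -p newest-cni-202395 | grep dashboard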
TestStartStop/group/newest-cni/serial/SecondStart (36.96s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202395 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1120 21:51:46.494421    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/calico-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:51:51.532542    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/bridge-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:12.014886    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/bridge-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:13.794742    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/enable-default-cni-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:17.980676    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/flannel-507207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202395 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (36.736242798s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202395 -n newest-cni-202395
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.96s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-202395 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-202395 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202395 -n newest-cni-202395
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202395 -n newest-cni-202395: exit status 2 (206.840437ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202395 -n newest-cni-202395
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202395 -n newest-cni-202395: exit status 2 (209.927835ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-202395 --alsologtostderr -v=1
E1120 21:52:21.975797    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/old-k8s-version-728530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:21.982292    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/old-k8s-version-728530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:21.993696    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/old-k8s-version-728530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:22.015266    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/old-k8s-version-728530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:22.056734    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/old-k8s-version-728530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:22.139179    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/old-k8s-version-728530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:52:22.301008    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/old-k8s-version-728530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202395 -n newest-cni-202395
E1120 21:52:22.622831    7706 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/old-k8s-version-728530/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202395 -n newest-cni-202395
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.31s)

Test skip (40/345)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
269 TestNetworkPlugins/group/kubenet 3.45
277 TestNetworkPlugins/group/cilium 4.01
284 TestStartStop/group/disable-driver-mounts 0.21

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-947553 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
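All seven TunnelCmd skips above share one cause: "minikube tunnel" has to modify the host routing table, and the CI user cannot run 'route' without a password prompt. A rough sketch of how such a precondition can be probed, using sudo's non-interactive -n flag (assumed logic with a hypothetical helper name, not the actual functional_test_tunnel_test.go code):

    package tunnel_test

    import (
            "os/exec"
            "testing"
    )

    // requireRootRoute is a hypothetical helper: it skips the calling test
    // when route(8) cannot be run without an interactive password.
    func requireRootRoute(t *testing.T) {
            // "sudo -n" exits non-zero instead of prompting when a password
            // would be required, which is the condition being skipped on.
            if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
                    t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
            }
    }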

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.45s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-507207 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-507207

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-507207

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /etc/hosts:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /etc/resolv.conf:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-507207

>>> host: crictl pods:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: crictl containers:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> k8s: describe netcat deployment:
error: context "kubenet-507207" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-507207" does not exist

>>> k8s: netcat logs:
error: context "kubenet-507207" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-507207" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-507207" does not exist

>>> k8s: coredns logs:
error: context "kubenet-507207" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-507207" does not exist

>>> k8s: api server logs:
error: context "kubenet-507207" does not exist

>>> host: /etc/cni:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: ip a s:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: ip r s:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: iptables-save:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: iptables table nat:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-507207" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-507207" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-507207" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: kubelet daemon config:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> k8s: kubelet logs:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:36:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.14:8443
  name: cert-expiration-925075
contexts:
- context:
    cluster: cert-expiration-925075
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:36:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-925075
  name: cert-expiration-925075
current-context: ""
kind: Config
users:
- name: cert-expiration-925075
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/cert-expiration-925075/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/cert-expiration-925075/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-507207

>>> host: docker daemon status:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: docker daemon config:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: docker system info:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: cri-docker daemon status:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: cri-docker daemon config:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: cri-dockerd version:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: containerd daemon status:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: containerd daemon config:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: containerd config dump:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: crio daemon status:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: crio daemon config:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: /etc/crio:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"

>>> host: crio config:
* Profile "kubenet-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507207"
----------------------- debugLogs end: kubenet-507207 [took: 3.279938503s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-507207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-507207
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)
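Note that every probe in the debugLogs block above failed the same way: the kubenet group is skipped before a cluster is created, so no "kubenet-507207" kubeconfig context ever exists, and the only entry kubectl finds belongs to the unrelated cert-expiration-925075 profile with current-context set to "". A small client-go sketch, offered only as an illustration of the lookup those kubectl calls fail on (it assumes a k8s.io/client-go dependency and the default kubeconfig path):

    package main

    import (
            "fmt"

            "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
            // Load the default kubeconfig (~/.kube/config) as kubectl does.
            cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
            if err != nil {
                    panic(err)
            }
            // The debugLogs kubectl calls name a context that was never
            // written, so this lookup comes back empty.
            if _, ok := cfg.Contexts["kubenet-507207"]; !ok {
                    fmt.Println(`context "kubenet-507207" does not exist`)
            }
    }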

TestNetworkPlugins/group/cilium (4.01s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-507207 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-507207

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-507207

>>> host: /etc/nsswitch.conf:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /etc/hosts:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /etc/resolv.conf:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-507207

>>> host: crictl pods:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: crictl containers:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> k8s: describe netcat deployment:
error: context "cilium-507207" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-507207" does not exist

>>> k8s: netcat logs:
error: context "cilium-507207" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-507207" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-507207" does not exist

>>> k8s: coredns logs:
error: context "cilium-507207" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-507207" does not exist

>>> k8s: api server logs:
error: context "cilium-507207" does not exist

>>> host: /etc/cni:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: ip a s:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: ip r s:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: iptables-save:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: iptables table nat:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-507207

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-507207

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-507207" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-507207" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-507207

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-507207

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-507207" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-507207" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-507207" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-507207" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-507207" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: kubelet daemon config:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> k8s: kubelet logs:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21923-3793/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:36:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.14:8443
  name: cert-expiration-925075
contexts:
- context:
    cluster: cert-expiration-925075
    extensions:
    - extension:
        last-update: Thu, 20 Nov 2025 21:36:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-925075
  name: cert-expiration-925075
current-context: ""
kind: Config
users:
- name: cert-expiration-925075
  user:
    client-certificate: /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/cert-expiration-925075/client.crt
    client-key: /home/jenkins/minikube-integration/21923-3793/.minikube/profiles/cert-expiration-925075/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-507207

>>> host: docker daemon status:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: docker daemon config:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: docker system info:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: cri-docker daemon status:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: cri-docker daemon config:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: cri-dockerd version:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: containerd daemon status:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: containerd daemon config:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: containerd config dump:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: crio daemon status:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: crio daemon config:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: /etc/crio:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"

>>> host: crio config:
* Profile "cilium-507207" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507207"
----------------------- debugLogs end: cilium-507207 [took: 3.848258554s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-507207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-507207
--- SKIP: TestNetworkPlugins/group/cilium (4.01s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-568251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-568251
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
